This is now handled in puppet-tripleo, so we can remove the
hard-coded reference to ControllerCount and use the hiera key
neutron_api_node_names to derive the number of neutron API
nodes regardless of roles.
Note that the NeutronL3HA parameter is kept, despite being marked
deprecated, because this bugfix needs to be backported and the
parameter can't simply be removed. I'm not sure whether we want to
drop the deprecation eventually, as leaving the override parameter
in place seems fairly low overhead.
Closes-Bug: #1629187
Change-Id: I7a77836dcaf809cc7959fca7691a4cd7d4af5d6a
Depends-On: I01c50973eec8138ec61304f2982d5026142f267c
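For reference, a minimal sketch of keeping the override parameter while
flagging it as deprecated, following the parameter_groups convention
already used elsewhere in the templates (the description text here is
illustrative):

    parameter_groups:
    - label: deprecated
      description: Do not use deprecated params, they will be removed.
      parameters:
      - NeutronL3HA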
|
ceph::profile::params::manage_repo should default to false when
using external Ceph.
Overcloud Ceph clients use Ceph packages, which may be provided by
the 'ceph' metapackage, but that metapackage is not available in all
repos (see the related bug). So this change also includes an explicit
list of packages as a workaround, as done in change
Ie55d22301dd22102d471e6002dfcaad4bfadd5f6.
Change-Id: I338e51637aa39d3f7bbbad0263740f728d42cb9b
Closes-bug: 1641989
Related-Bug: 1629933
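A minimal sketch of the hieradata side for the external-Ceph case; the
client package names in the comment are only an illustration, the exact
set comes from the referenced change:

    config_settings:
      ceph::profile::params::manage_repo: false
      # plus an explicit list of client packages (e.g. ceph-common,
      # python-rbd, python-rados), since the 'ceph' metapackage is not
      # available in every repo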
|
Instead of relying on an explicit hiera call to get the stack domain
password, this introduces the value via the keystone parameter.
Change-Id: I0e5124d57fdc519262fdec2dbeaaac85afaeebdf
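A rough sketch of the idea, assuming the HeatStackDomainAdminPassword
parameter is wired straight into the keystone profile's hieradata (the
hiera key name below is an assumption, not necessarily the one the
change uses):

    config_settings:
      # consumed by puppet-tripleo's keystone profile instead of it
      # performing its own hiera() lookup for the password
      tripleo::profile::base::keystone::heat_admin_password:
        get_param: HeatStackDomainAdminPassword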
|
This patch resolves an issue with nova-base.yaml that prevents
it from working with the new heat hiera agent hook (which
uses JSON instead of YAML).
It updates the service so that we only set the upgrade level if it
is not an empty string.
Partial-bug: #1596373
Change-Id: I595f2e16c33a6f935c7ca8935fec445d19c7b8f3
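A sketch of the guard, assuming a heat condition on the
UpgradeLevelNovaCompute parameter (the condition name is illustrative
and a heat_template_version with condition support is required):

    conditions:
      compute_upgrade_level_empty:
        equals: [{get_param: UpgradeLevelNovaCompute}, '']
    ...
        config_settings:
          map_merge:
            - if:
              - compute_upgrade_level_empty
              - {}   # leave the hiera key unset entirely
              - nova::upgrade_level_compute: {get_param: UpgradeLevelNovaCompute}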
|
This patch resolves a few issues I noticed when porting our
Horizon service to support the new heat hiera agent hook (which
uses JSON instead of YAML).
- We only need to set django_debug if the string is non-empty. This
  should match previous behavior.
- Remove the duplicated NeutronMechanismDrivers setting. This is
  already managed in the neutron services and shouldn't be set here.
Change-Id: I473e110bb9b14cb8f57d41c4fc398871548726b0
Partial-bug: #1596373
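For the django_debug part, a sketch of the kind of conditional involved
(the condition name is illustrative and the Debug parameter is assumed
to be the source of the value):

    conditions:
      debug_empty: {equals: [{get_param: Debug}, '']}
    ...
        config_settings:
          map_merge:
            - if:
              - debug_empty
              - {}   # keep horizon's own default when Debug is unset
              - horizon::django_debug: {get_param: Debug}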
|
https://review.openstack.org/#/c/388688/ has removed ceilometer-dbsync so
ceilometer-upgrade must be used instead.
Additionally, ceilometer-dbsync enabled the --skip-gnocchi-resource-types
option by default and ceilometer-upgrade doesn't, so I'm setting it by
default to ensure backwards compatibility.
Note this is based on the corresponding fix to puppet-ceilometer ref
https://review.openstack.org/#/c/396570
Change-Id: Ic0a15c75d1cd3e3f70eeafd9ba09d50c58cc1293
Closes-Bug: #1641076
|
Deployments using an external LB will fail like this:
deploy_stderr: |
+ RESTART_FOLDER=/var/lib/tripleo/pacemaker-restarts
+ [[ -d /var/lib/tripleo/pacemaker-restarts ]]
++ systemctl is-active haproxy
+ haproxy_status=unknown
deploy_status_code: 3
openstack software deployment show 4f339ca4-7600-4ca0-b0ef-f798bc47b6cf
The reason is that via https://review.openstack.org/#/c/393644/ we
introduced the haproxy restart like this:
haproxy_status=$(systemctl is-active haproxy)
if [ "$haproxy_status" = "active" ]; then
    systemctl reload haproxy
fi
The problem is that if haproxy is not running/installed, systemctl
is-active can fail and the script will terminate with an error return
code. Let's just move the call inside the if so the script does not fail
in case haproxy is not there.
The snippet before the change (on a system without haproxy installed):
[root@mrg-09 tmp]# ./test.sh
++ systemctl is-active haproxy
+ haproxy_status=unknown
[root@mrg-09 tmp]# echo $?
3
After this change:
[root@mrg-09 tmp]# ./test.sh
++ systemctl is-active haproxy
+ '[' unknown = active ']'
[root@mrg-09 tmp]# echo $?
0
Change-Id: I837c63a9dbcde8c922f843c442974fa79cf1eede
Closes-Bug: #1641904
|
In order to eventually enable fernet tokens for keystone, we need to be
able to specify the token provider. This change codifies the current
TripleO defaults: uuid tokens, with fernet token setup disabled.
Change-Id: I7c03ed7b6495d0b9a57986458d020b3e3bf7224a
Closes-Bug: #1641763
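A minimal sketch of the codified defaults (the parameter name is an
assumption):

    parameters:
      KeystoneTokenProvider:
        description: The keystone token format
        type: string
        default: 'uuid'
    ...
        config_settings:
          keystone::token_provider: {get_param: KeystoneTokenProvider}
          # fernet setup stays disabled with the current default
          keystone::enable_fernet_setup: false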
|
In Ocata we changed the HA policy to "ha-exactly" via the following changes:
- tht: Iace6daf27a76cb8ef1050ada0de7ff1f530916c6
- puppet-tripleo: Ib62001c03e1e08f58cf0c6e0ba07a8879a584084
We initially also took care of changing this policy (which is set in the
pacemaker resource agent) for the M/N upgrade path:
I2468a096b5d7042bc801a742a7a85fb1521c1c02
In the end we decided against changing the policy in Newton as well (it
was meant only for Ocata), as it was too close to the release date and
we took the safer path.
This patch does two things:
1) It renames the upgrade function to "newton_ocata" since that is the
only upgrade path we need to take care of
2) It reinstates the actual upgrade function which was mistakenly
removed via an unrelated change in the ceilometer upgrade path:
If9d6987cd0a8fc5d3f9de518ba422d97d5149732
Closes-Bug: #1628998
Change-Id: I3a97505d2ae1ae27f3080ffe74c33fdabffd2420
|
This adds the necessary hieradata for enabling TLS in the internal
network for Barbican API.
bp tls-via-certmonger
Depends-On: I1c1d3dab9bba7bec6296a55747e9ade242c47bd9
Change-Id: Ib100faa9dc222f836695a0e8f6e101dc7637d1d6
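A rough sketch of the pattern other apache-based API services use for
TLS on the internal network; the resource and parameter names below
follow those services and may differ from what this change actually
adds:

    parameters:
      EnableInternalTLS:
        type: boolean
        default: false
    ...
        config_settings:
          map_merge:
            - get_attr: [ApacheServiceBase, role_data, config_settings]
            - barbican::wsgi::apache::ssl: {get_param: EnableInternalTLS}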
|
Currently OVS tunnel firewall rules are held within the neutron ovs
agent service heat template. That service is not used with ODL, so
ODL was missing the VXLAN and GRE firewall rules and
traffic would not pass between nodes. This adds the missing rules to
the OpenDaylight OVS service.
Closes-Bug: 1641191
Change-Id: Icfd7db6a3e8fcdd02646fb7e413f40f26b03b994
Signed-off-by: Tim Rozet <trozet@redhat.com>
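A sketch of the added rules, mirroring the ones carried by the neutron
OVS agent service (the service prefix and rule labels are assumed to
follow that template):

    config_settings:
      tripleo.opendaylight_ovs.firewall_rules:
        '118 neutron vxlan networks':
          proto: 'udp'
          dport: 4789
        '136 neutron gre networks':
          proto: 'gre'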
|
When the civetweb binding IP is version 6 it needs to be enclosed
in brackets or the bind socket parsing fails. The mangling happens
in puppet-tripleo; this change updates the templates to push the
appropriate hiera keys.
Change-Id: Ic7004d768ed5e0f2382ffaa57961ea0ef9162527
Closes-Bug: #1636515
Depends-On: Ib84fa3479c2598bff7e89ad60a1c7d5f2c22c18c
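A rough sketch of the template side; the hiera key name below is
purely illustrative, the point being that the raw ServiceNetMap address
is pushed as-is and puppet-tripleo adds the brackets when it turns out
to be IPv6:

    config_settings:
      # hypothetical key name; the real one lives in the RGW service
      tripleo::profile::base::ceph::rgw::civetweb_bind_ip:
        get_param: [ServiceNetMap, CephRgwNetwork]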
|
We are noticing several tests failing in our low-memory environment
because of timeouts in neutron requests.
As an example, the test
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON
fails because it asks nova to plug a vif, which sends a request to
neutron that takes longer than neutron_url_timeout to answer; since
vif_plugging_is_fatal defaults to True, the test fails.
Checking the neutron log shortly thereafter, you can see the request
returning with the proper status after more than neutron_url_timeout,
but by then it is too late: nova has already marked the instance with
an error status, so the test fails.
Closes-Bug: #1641135
Change-Id: If0991c114f199490ac0deb71eb569a42d4711359
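The message describes only the symptom; one plausible shape for a fix
is raising nova's neutron client timeout via hieradata. A sketch under
that assumption (the parameter name and default are hypothetical;
nova::network::neutron::neutron_url_timeout is the puppet-nova setting
that feeds neutron_url_timeout):

    parameters:
      NovaNeutronUrlTimeout:   # hypothetical parameter name
        type: number
        default: 60
    ...
        config_settings:
          nova::network::neutron::neutron_url_timeout: {get_param: NovaNeutronUrlTimeout}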
|
By default sensu-puppet overrides the default list of variables which
should be redacted. This patch makes the redact list configurable and
uses the default value given by [1]. It also serves as a workaround
until [2] is merged in the module itself (or in case it never gets
merged).
[1] https://sensuapp.org/docs/0.24/reference/clients.html
[2] https://github.com/sensu/sensu-puppet/pull/580
Closes-Bug: #1641080
Closes-Bug: rhbz#1392473
Change-Id: I21201f734d2fbf5f571091603126cf11cfdd8c40
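A sketch of making the list configurable with the upstream default
from [1] (the parameter name is an assumption):

    parameters:
      SensuRedactVariables:
        description: Variables from the Sensu client config to redact
        type: json
        default:
          - password
          - passwd
          - pass
          - api_key
          - api_token
          - access_key
          - secret_key
          - private_key
          - secret
    ...
        config_settings:
          sensu::redact: {get_param: SensuRedactVariables}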
|
The capitalization of OS::Tripleo is wrong compared to all other services,
so correct this to avoid confusion when folks write custom roles_data
files or pass custom service lists via *Services parameters.
Change-Id: Ib73c80871b45586edb5774e90280ff89fc0d9895
Closes-Bug: 1640871
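For example, a roles_data service entry should read (the service name
is just a placeholder):

    - OS::TripleO::Services::Example   # not OS::Tripleo::Services::Example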
|
Closes-Bug: rhbz#1392428
Closes-Bug: #1640834
Change-Id: I2a1a869493ccb4c8d5b9aea26b8ef947750d2cfe
|
This patch resolves a few issues I noticed when porting our
Neutron L3 service to support the new heat hiera agent hook (which
uses JSON instead of YAML).
- If NeutronExternalNetworkBridge is an empty string '', JSON was
  dropping the single quotes, causing the bridge to get set
  incorrectly in the config file. To correct this we use a heat
  conditional to avoid setting the external bridge at all when the
  parameter is an empty string (the '' default is what we want in
  this case).
Change-Id: I5037cbde6b76a37a4c22c4616278420e9d759109
Partial-bug: #1596373
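A sketch of the conditional described above (the condition name is
illustrative; the hiera key is puppet-neutron's external_network_bridge
setting):

    conditions:
      external_network_bridge_empty:
        equals: [{get_param: NeutronExternalNetworkBridge}, '']
    ...
        config_settings:
          map_merge:
            - if:
              - external_network_bridge_empty
              - {}   # keep the puppet default, i.e. no bridge configured
              - neutron::agents::l3::external_network_bridge: {get_param: NeutronExternalNetworkBridge}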
|
This patch updates the Yaql expressions that work on role_data
so that they evaluate properly when the get_attr for role_data
is null.
I hit issues using this for the heat undercloud installer and this
seems to resolve them.
Change-Id: I0493d0525cd3ad280339f26ef9d3aa311af9962e
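A sketch of the kind of guard involved: filter out null entries before
selecting from them, so the expression still evaluates when get_attr
resolves to null (the expression and names below are illustrative, not
the exact ones in the patch):

    yaql:
      expression: list(coalesce($.data, []).where($ != null).select($.get('service_name')))
      data: {get_attr: [ServiceChain, role_data]}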
|
Modify the syntax used to access the ResourceGroup attributes so we
always select the first node from the group, e.g. even if the node
named "0" in the ResourceGroup nested stack has been removed due to
the removal policy.
Change-Id: I8b1c9538976a1518b220187a0034ad41a738d5a6
Closes-Bug: #1640449
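The gist is to stop addressing the member literally named "0" and to
take the first entry of the aggregated attribute list instead; a rough
sketch (resource and attribute names are illustrative, and this is not
necessarily the exact syntax the patch uses):

    # fragile: breaks once member "0" is removed via the removal policy
    value: {get_attr: [ServerGroup, resource.0.hostname]}

    # more robust: aggregate the attribute across all members, then
    # take the first entry
    value:
      yaql:
        expression: $.data.first()
        data: {get_attr: [ServerGroup, hostname]}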
|
When the manila API service is deployed
on a role other than the controller, the
iptables rules on that role fail to ACCEPT
tcp traffic on the manila API ports.
Add tripleo.manila_api.firewall_rules to
the relevant puppet services module.
Change-Id: I1c5459f5ba989657fd99fd72c7ac9f8781cc7206
Closes-Bug: #1640568
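A sketch of the added rule set (the rule label and the TLS-proxy port
are assumptions; 8786 is the manila API port):

    config_settings:
      tripleo.manila_api.firewall_rules:
        '150 manila':
          dport:
            - 8786
            - 13786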
|
To improve security, we should disable the password reveal option in
horizon by default. An end user can override this option via their own
custom hiera if they would ultimately like to have this functionality.
Change-Id: Ie88dac5610840eb4b327252b32dc469099ba5f5f
Depends-On: Iacf899d595a2a3c522df1b96ca527731937ec698
Closes-Bug: 1640492
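A minimal sketch of the resulting hieradata, relying on the
disable_password_reveal knob added to puppet-horizon by the Depends-On
change:

    config_settings:
      horizon::disable_password_reveal: true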
|
Currently when we call the major-upgrade step we do the following:
"""
...
if [[ -n $(is_bootstrap_node) ]]; then
    check_clean_cluster
fi
...
if [[ -n $(is_bootstrap_node) ]]; then
    migrate_full_to_ng_ha
fi
...
for service in $(services_to_migrate); do
    manage_systemd_service stop "${service%%-clone}"
    ...
done
"""
The problem with the above code is that it is open to the following race
condition:
1. Code gets run first on a non-bootstrap controller node, so we start
stopping a bunch of services
2. Pacemaker will notice that those services are down and will mark
them as stopped
3. Code gets run on the bootstrap node (controller-0) and the
check_clean_cluster function will fail and exit
4. Eventually the script on the non-bootstrap controller node will also
time out and exit because the cluster never shut down (it never actually
started the shutdown because we failed at 3)
Let's make sure we first call the HA NG migration as a separate heat
step, and only afterwards start shutting down the systemd services on
all nodes.
We also need to move the STONITH_STATE variable into a file because it
is being used across two different scripts (1 and 2) and we need to
store that state.
Co-Authored-By: Athlan-Guyot Sofer <sathlang@redhat.com>
Closes-Bug: #1640407
Change-Id: Ifb9b9e633fcc77604cca2590071656f4b2275c60
|