Age | Commit message | Author | Files | Lines |
|
|
This change fixes a logic error when the L3 agent is disabled: the
Pacemaker constraint (neutron-dhcp-agent-to-l3-agent-constraint) was
still looking for l3_agent_service in the Puppet catalog, but could not
find it because the L3 agent was disabled.
Puppet reported this error:
Error: Could not find dependency
Pacemaker::Resource::Service[neutron-l3-agent] for
Pacemaker::Constraint::Base[neutron-dhcp-agent-to-l3-agent-constraint]
Change-Id: I0e5d24d844810c58a3205303399d1c20773af3dd
|
|
This patch adds GlanceRegistry to the endpoint map. This
makes it possible to access Glance registry settings via the
endpoint map.
Change-Id: I9186e56cd4746a60e65dc5ac12e6595ac56505f0
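For illustration, the new entries would follow the structure of the existing
EndpointMap defaults; a hedged sketch (entry names are assumptions modelled on
the other services, 9191 is the usual Glance registry port):

  # Hypothetical excerpt from the EndpointMap parameter default
  GlanceRegistryInternal: {protocol: http, port: '9191', host: IP_ADDRESS}
  GlanceRegistryAdmin: {protocol: http, port: '9191', host: IP_ADDRESS}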
|
|
There are backwards-incompatible patches [1][2] in puppet-cinder which break
upgrade scenarios, but at this point they have made it into liberty and mitaka,
so a workaround in t-h-t is easier than a revert.
[1] https://review.openstack.org/#/c/209412/
[2] https://review.openstack.org/#/c/231068/
Change-Id: Ic82258bf0893ebd4e595e5df73ffbc4c6443f9e8
Closes-Bug: #1570265
|
|
This part of overcloud_controller_pacemaker.pp has a lot of duplicated
code defining haproxy and VIP creation. This is an attempt to refactor
it.
Change-Id: Icbd560de08999e48cfb54c6f3c94f8b96cddd6ba
Depends-On: I4cc6711911c1bfa1bc6063979e2b2a7ab5b8d37b
|
|
Previously ceilometer-notification, aodh-listener and sahara-engine
didn't have constraints that would anchor them under the openstack-core
dummy resource. Such constraints are added now (sahara-engine starting
after sahara-api, aodh-listener after aodh-evaluator, and
ceilometer-notification after openstack-core). The openstack-core ->
heat-api constraint has been removed because heat-api depends on
ceilometer-notification, so there's a transitive dependency on
openstack-core already.
Change-Id: Ided7321ebbf2c3556726343b4bb466fd8759b43a
Closes-Bug: #1569444
|
|
In the environments/ subdirectory of tripleo-heat-templates, we mostly
use parameter_defaults, but some of the environment files still use
parameters. This can lead to confusing behavior with respect to
parameter priority when passing environment files to deploy/update
commands. Users might expect that subsequent environment files take
priority over preceding ones, but that might not be the case if the
preceding environment files use `parameters`, while the subsequent ones
use `parameter_defaults`.
This commit switches all `parameters:` uses in environments/
subdirectory to `parameter_defaults:`.
Change-Id: Ie4c03c7e7f5a5004a0384d35817135f357e9719b
Closes-Bug: #1567837
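As a minimal sketch of the difference (NtpServer is just an example parameter),
the same value expressed both ways in an environment file:

  # Old style: `parameters`, whose priority can be surprising when
  # several -e files are combined
  parameters:
    NtpServer: pool.ntp.org

  # New style: `parameter_defaults`, as now used throughout environments/
  parameter_defaults:
    NtpServer: pool.ntp.org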
|
|
* Deploy Gnocchi API.
* Storage backends: swift, rbd and file.
* Indexer backend defaults to MySQL
* Configure Ceilometer to send metrics data to Gnocchi
* Pacemaker config
Depends-On: Ic8778a3104e0ed0460423e4bf857682220dc5802
Depends-On: I7d2eb9405e0171fc54fa0b616122f69db5f51ce2
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Change-Id: Ifde17b1ab8fa2b30544633e455e1c7eb475705aa
|
|
The change in ab068a824ed51e78bf111387223e58e885ec5c84 is described as
temporary, so it would be better if it did not affect the EndpointMap
parameter (which is effectively a public interface, since it may be
overridden in an environment file). No configuration should end up with
different ports/protocols/hosts for Keystone v2 and v3, and somebody
customising them should not have to account for them separately. Nor
should things break when the need to distinguish between v2 and v3
endpoints goes away.
This change removes the KeystoneV3* keys from the EndpointMap input and
uses the Keystone* keys instead, so that any change to the internal
organisation becomes transparent to the user.
Change-Id: If4cdd9232f4dbc9f2af651bbdfe68f09dc26ed2e
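For someone overriding the map in an environment file, only the Keystone* keys
remain to customise; a hedged sketch (entry format follows the existing
EndpointMap structure, the values shown are examples):

  parameter_defaults:
    EndpointMap:
      # The former KeystoneV3* keys are gone; v2 and v3 endpoints are
      # both derived from these entries internally. Values are examples.
      KeystonePublic: {protocol: https, port: '13000', host: CLOUDNAME}
      KeystoneInternal: {protocol: http, port: '5000', host: IP_ADDRESS}
      KeystoneAdmin: {protocol: http, port: '35357', host: IP_ADDRESS}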
|
|
Adds new puppet and puppet pacemaker specific services for
Keystone.
The puppet manifests for keystone now live in puppet-tripleo.
Hiera settings are driven by the nested stack heat templates
and used to control puppet-keystone and puppet-tripleo
directly.
The Pacemaker template extends the default keystone service and
swaps in the pacemaker specific puppet-tripleo profile instead.
Change-Id: I8b30438a27e9d5ec4e7d335e0bd1a931a20b03a2
Depends-On: I2faf5a78db802549053ec41678bf83bf28108189
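A rough outline of the shape of such a nested service template, to show where
the hiera settings and the puppet-tripleo profile hook in (keys and values here
are illustrative, not the actual template contents):

  heat_template_version: 2016-04-08
  description: Keystone service configured with Puppet (sketch)
  outputs:
    role_data:
      description: Role data for the Keystone service
      value:
        config_settings:
          # hiera consumed by puppet-keystone / puppet-tripleo (illustrative key)
          keystone::admin_bind_host: 0.0.0.0
        step_config: |
          # the Pacemaker template swaps in a pacemaker-specific profile here
          include ::tripleo::profile::base::keystone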
|
|
Removes the old noop nested stack template for extraconfig
tasks and instead uses OS::Heat::None. This should avoid a few
extra resource checks on create and update.
Change-Id: I5a42fc78ece2553e86385236e214aa1e3c91cd85
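In practice the swap is a one-line resource_registry change; a minimal sketch
(the registry key shown is one illustrative extraconfig task hook):

  resource_registry:
    # Previously pointed at a noop nested stack template; OS::Heat::None
    # creates no resources, so Heat skips the extra checks on create/update.
    OS::TripleO::Tasks::ControllerPrePuppet: OS::Heat::None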
|
|
Removes the old noop nested stack template for networks and
uses OS::Heat::None instead. This should avoid a few
extra resource checks on create and update.
Change-Id: Ia3d7f62dbda2705ffc3d9edcddebcd3ece3cc9d2
|
|
Create the glance-fs Pacemaker resource on one node (pacemaker master)
instead of all nodes, and set verify_on_create to True.
* It avoids a race condition if Puppet is applied on 2 nodes at the
same time, so the filesystem creation is attempted only once.
* Verify with pcs that the resource has been correctly created.
The full context of the bug is described here:
https://bugzilla.redhat.com/show_bug.cgi?id=1319384
Change-Id: I625f0879ae56e814664d1433ae47e27148779f12
|
|
The change at https://review.openstack.org/#/c/302352/ should stop
the if up/down scripts from making changes to resolv.conf as
discussed in that review and the related bug below. However, during
upgrades, since we are moving from a version of the ifcfg-vlanXX files
that doesn't yet have the PEERDNS=no added by /#/c/302352, the ifup
script will restore /etc/resolv.conf.save to /etc/resolv.conf
and overwrite it. This removes the .save file during the upgrade
init command, which gets delivered to all nodes as the first stage
of a major upgrade.
Change-Id: I91dd139f43be4912c20d8661691bee2b662964d4
Related-Bug: 1567004
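A hedged sketch of delivering that cleanup through the upgrade-init step (the
parameter name here is hypothetical; it stands in for whatever mechanism runs a
command on every node first):

  parameter_defaults:
    UpgradeInitCommand: |  # hypothetical parameter name
      #!/bin/bash
      # Stop ifup from restoring a stale resolv.conf during the upgrade.
      rm -f /etc/resolv.conf.save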
|
|
This reverts commit 570c690bfb118e0cf130b7dbed7992676519ed9b.
This patch broke the ping_gateway_function when using IPv6
network isolation.
Change-Id: I57850a527804f2e753270fd9063d119d41a83b17
Closes-bug: #1567011
|
|
By default only the admin user key is generated, and this key is used
for both the admin and openstack users.
Because the mode of the client's key file is 644, any user with a
valid shell on the controller/compute/ceph nodes can perform admin
operations on the ceph cluster.
This patch allows using the random key generated by tripleoclient
for the openstack user.
Change-Id: I771bbee81c0acfe593e92a99ad12d6f1f7f445ef
Closes-bug: #1566927
Depends-On: I404665c09084f0a6cd2d8872940ee90220dc5f69
|
|
This might prevent members from being dropped from the corosync cluster
in high-load environments. Symptoms of this problem can sometimes be
found in the corosync log:
dub 05 17:23:45 overcloud-controller-0 corosync[14152]: [MAIN ] Corosync
main process was not scheduled for 3691.8391 ms (threshold is 1320.0000
ms). Consider token timeout increase.
The default in the Puppet manifest is 1 second, which matches the
corosync default, and we override it with hiera to 10 seconds.
Change-Id: I5ea850ada657e5eecafa3e8b28613a0ac48e78f3
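The override itself lives in hieradata; a minimal sketch, assuming a key along
these lines is what the manifest reads before configuring corosync (the exact
key name is an assumption; the value is in milliseconds, as corosync expects):

  # Hypothetical hieradata excerpt
  corosync_token_timeout: 10000   # 10 s; the corosync default is 1000 ms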
|
|
Right now, the service-related IPs associated with the machine are
registered in the /etc/hosts with different hostnames. This is fine,
except if you need to register that hostname in a third party service
(such as FreeIPA), since the current configuration is not assigning a
domain to those IP addresses. So the current implementation requires
DNS to be properly working, which is not ideal for testing purposes.
Since the current hostnames are not being used anywhere yet, it's still
trivial to change this mapping and their format. Instead of
having entries such as:
<INTERNAL IP> <node>-internalapi
<STORAGE IP> <node>-storage
...
in /etc/hosts, this changes the format to:
<INTERNAL IP> <node>.internalapi.<domain> <node>.internalapi
<STORAGE IP> <node>.storage.<domain> <node>.storage
...
So the network (external, internal, storage, etc...) is now
represented as a subdomain. For simplicity, the format without the
domain is still available through an alias.
Change-Id: I6502959a974546e5de757935acea15df6326acda
|
|
Kolla has been using Ceph. For a while, cinder had
iscsi built into it, but it was removed. In order to
get this to work with containers again, the nova-compute and
libvirt containers need /dev and /lib/udev mounted into them.
We also need to copy nova's rootwrap.conf into the nova
container, which was missing this config file.
Change-Id: Ie77f56b4576d5393ad3756b0f5ecc3eeff844d1f
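As a rough compose-style illustration of the bind mounts involved (image name
and syntax are examples, not the exact template used here):

  novacompute:
    image: kollaglue/centos-rdo-nova-compute   # example image
    privileged: true
    volumes:
      - /dev:/dev                # device nodes needed for cinder volumes
      - /lib/udev:/lib/udev      # udev data used by the iscsi tooling
    # rootwrap.conf must also be copied into the nova container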
|
|
When there are extra customizations inside a TripleO-deployed
Pacemaker environment, say instance HA with pacemaker_remoted or an
external arbitrator configured for something, the status of the
resources for remote nodes is "Stopped".
This leads to failures while, for example, scaling up.
This fixes the way status is checked, filtering on local nodes only.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I8dc25f5d7031c265858afd5a266fda5315ae37a0
|
|
If a certificate expires, the user will need to update it. However,
because we only restart services at the end of a stack-update the
new certificate doesn't take effect until after puppet has run.
This is a problem because puppet makes OpenStack calls, which will
fail if the certificate is expired. In that case we never get to
the service restart, so the stack is wedged until the user manually
restarts haproxy.
This patch addresses the problem by reloading haproxy before puppet
runs. This is done in a pre-puppet script for pacemaker, after pacemaker
is put into maintenance mode, because we need to make sure it happens after all of
the certs have been installed on the controllers, but before puppet
runs.
For non-pacemaker, haproxy is simply reloaded.
Change-Id: Id5ed05b3a20d06af8ae7a3d6f859b03399b0d77d
|
|
Microversions in the Nova API since v2.1 are aimed at replacing the v3 work.
The /v2.1 endpoint is backwards compatible with the legacy /v2 endpoint. What
we used to call /v3 is now defunct in-tree. The /v2.1 API
is based on the v3 work, but many things differ, in
particular around backwards compatibility. We keep the /v2 path in
api-paste.ini to make sure an upgrade doesn't trample operators and
users, but in-tree that path now redirects to the v2.1
codepath (just without asking for microversions). In summary, we only need
one endpoint, i.e. /v2.1.
Additional information at https://bugzilla.redhat.com/show_bug.cgi?id=1291291
Related-Bug: #1564372
Change-Id: I1654665663bc5a19c201f7d25407910654ac1308
Depends-On: I6d64b8bcd0f79f1f298ddc809e6d92fbc2985c45
|
|
This patch wires in a Heat feature new in Mitaka
that allows us to dynamically include a set of nested
stacks representing individual services via a Heat resource chain.
Follow on patches will use this interface to decompose the controller
role into isolated services.
Co-Authored-By: Steve Hardy <shardy@redhat.com>
Depends-On: If510abe260ea7852dfe2d1f7f92b529979483068
Change-Id: I84c97a76159704c2d6c963bc4b26e365764b1366
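A hedged sketch of what the resource chain looks like in the role template
(service names follow the OS::TripleO::Services registry convention; treat the
details as illustrative):

  parameters:
    ControllerServices:
      type: comma_delimited_list
      default:
        - OS::TripleO::Services::Keystone
        - OS::TripleO::Services::GlanceApi
  resources:
    ControllerServiceChain:
      type: OS::Heat::ResourceChain
      properties:
        resources: {get_param: ControllerServices}
        concurrent: true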
|
|
The endpoint map contains not only the hosts and protocols that
the resulting services will use, but also the ports. This
information is useful, and the aim of this patch is to make it
available for tripleoclient to use.
Change-Id: I4cc5bbf2e7200f78cd90b93659c326a9200278d7
|
|
Atomic is set to Docker 1.8.2. We no longer need to pull the
latest Docker to make our template work.
Change-Id: I8ab4e135ed4891763f8ced596116b14101466160
Co-Authored-By: Ian Main <imain@redhat.com>
|
|
In order to use cinder, we need to be able to use
/dev/pts/ptmx. CentOS sets this to 000, whereas on Fedora
it's 666.
Change-Id: I76dc5adc64d2da0d27204ea31175244bc1b94428
|
|
The generated galera config has to include additional settings for
galera to be active on MariaDB 10.1.
wsrep_on must be explicitly set to ON. On MariaDB 5.5, this was
implicitly set as soon as wsrep_provider was specified.
A valid wsrep_cluster_address must be configured in addition to
wsrep_on, otherwise the recovery command mysqld_safe --wsrep-recover
cannot retrieve the replication state, and the cluster cannot be bootstrapped.
These explicit settings are backward compatible with MariaDB 5.5 since
the two variables exist in both versions of MariaDB.
Change-Id: I4ab4f4eeb8679899f194399ba8695155e9a2f4a5
Closes-Bug: 1563751
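A minimal hiera-style sketch of the resulting settings (the override key
follows the puppetlabs-mysql convention; the exact key path and the cluster
address are examples):

  mysql::server::override_options:
    mysqld:
      wsrep_on: 'ON'
      # Needed so `mysqld_safe --wsrep-recover` can read the replication
      # state when bootstrapping the cluster.
      wsrep_cluster_address: gcomm://controller-0,controller-1,controller-2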
|
|
The single-ping method in the validation script is causing
deployments to fail. When reviewing the network connectivity, we
find that we actually do have connectivity
( https://gist.github.com/jtaleric/0276a117625e44993be0 ). This patch
changes the ping count from 1 to 10 to ensure the network is up.
Closes-Bug: 1563521
Change-Id: I9772407554dffa91978a49a16490ef9ed448a054
|
|
Some options in neutron.conf are used by the OVS agent, like logging and
messaging.
During the upgrade process, you need to restart the agent if these
options change.
We could patch puppet-neutron to add a notify, but the community won't
like it because the Neutron OVS agent is not able to restart gracefully
until [1] is merged. Until then, we can fix it in TripleO, where we
assume Puppet runs happen during bootstraps and upgrades.
Later, we'll drop this code from here and move it into puppet-neutron.
[1] https://review.openstack.org/#/c/297211
Change-Id: I02b17b66e93331ddfb1a7abd8adff672bc7a32d6
Closes-Bug: #1563437
|
|
Change-Id: I60ab36b04b8932e4dbee58e21998dc984178b41c
Bugzilla: https://bugzilla.redhat.com/1275281
|
|