Add an option to include the [ml2_odl] section in ml2_conf.ini
when the opendaylight_v2 mechanism driver is used.
Change-Id: I2a1c5097614e47cc09e43bbc77305a0548d54baa
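
For illustration, a minimal sketch of how the [ml2_odl] section could be
populated, assuming puppet-neutron's OpenDaylight ML2 class (URL and
credentials are placeholders, not shipped defaults):

    # Sketch only: configures [ml2_odl] in ml2_conf.ini.
    class { '::neutron::plugins::ml2::opendaylight':
      odl_url      => 'http://odl.example.com:8080/controller/nb/v2/neutron',
      odl_username => 'admin',
      odl_password => 'admin',
    }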
|
Configure Nova with the new Oslo Messaging parameters for RabbitMQ.
Note: the parameters are renamed to standard names, which will help a
future transition to another messaging backend in TripleO.
Change-Id: Ia67a4dbe5b2bd12c45308a5581f96d0457b8e018
|
This closes CVE-2016-9599.
1) Sanitize dynamic HAproxy endpoints firewall rules
Build the hash of firewall rules only when a port is specified. The
HAProxy endpoints use the TCP protocol, which means we have to specify
a port in the IPtables rules.
Some services don't have public network exposure (e.g. Glance Registry),
which means they don't need a haproxy_ssl rule.
The code prepares the hash from the service_port and public_ssl_port
parameters and creates the actual firewall rules only if at least one of
those parameters is specified.
This prevents new services without public exposure from opening all
traffic because no port is specified.
2) Secure firewall rule creation
The code no longer allows creating TCP / UDP IPtables rules in the INPUT
or OUTPUT chains without port, sport or dport, because such a rule would
open all traffic for TCP or UDP.
If we try to do that, the Puppet catalog will fail with an error
explaining why.
Example use-cases:
- creating VRRP rules wouldn't require port parameters.
- creating TCP or UDP rules would require port parameters.
3) Allow opening all traffic for TCP / UDP (when desired)
Some use-cases require opening all traffic for all ports on TCP / UDP.
This is possible if the user passes port = 'all' when creating the
firewall rule.
Backward compatibility:
- if our users created custom TCP / UDP firewall rules without port
parameters, they won't work anymore, for security purposes.
- if our users want to open TCP / UDP for all ports, they need to pass
port = 'all' and the rule will be created, though a warning will be
displayed because this is insecure.
- if our users created custom VRRP rules without port parameters, it
will still work correctly and rules will be created.
- TCP / UDP rules in FORWARD chain without port are still accepted.
Change-Id: I19396c8ab06b91fee3253cdfcb834482f4040a59
Closes-Bug: #1651831
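
For illustration, a rough sketch of the resulting behaviour with the
tripleo::firewall::rule defined type (rule names and ports are made up):

    # VRRP is neither TCP nor UDP, so no port parameter is required.
    tripleo::firewall::rule { '106 keepalived vrrp':
      proto => 'vrrp',
    }

    # TCP / UDP rules in INPUT or OUTPUT must now carry port, dport or sport.
    tripleo::firewall::rule { '100 example tcp service':
      dport => [8080, 13080],
    }

    # Opening all traffic is still possible, but must be explicit and
    # triggers a warning because it is insecure.
    tripleo::firewall::rule { '199 open all tcp':
      proto => 'tcp',
      port  => 'all',
    }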
|
I'm seeing this run yum update on deploy, even though hiera
tripleo::packages::enable_upgrade says false.
I assume these changes are needed because we're getting "false", but I'm
unclear if this is a recently introduced problem (I only noticed it
today because my image has outdated CentOS packages and it thus hung on
step 2 of the deploy).
Change-Id: If09cdde9883f2674dbbc40944be5fe4445caa08e
Closes-Bug: #1652107
|
Remove the inclusion of Neutron services in the OpenDaylight API service.
This was breaking custom roles when splitting up OpenDaylight and Neutron
into different roles. Also, references to "controller" are removed
because with custom roles OpenDaylight could be isolated to any role
type. The 'bootstrap_nodeid' still works with any role, and only
resolves to the first node in the series of nodes for that role type.
Partial-Bug: 1651499
Change-Id: I418643810ee6b8a2c17a4754c83453140ebe39c7
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
Some services need a terminating proxy to do TLS on their main
interfaces. To address this, we use httpd's mod_proxy and make it listen
in front of these services with an appropriate certificate.
bp tls-via-certmonger
Change-Id: I82243fd3acfe4f23aab373116b78e1daf9d08467
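
For illustration, a minimal sketch of such a terminating proxy using the
puppetlabs-apache module; the service name, addresses, port and
certificate paths are assumptions, not the actual profile code:

    include ::apache
    include ::apache::mod::ssl
    include ::apache::mod::proxy
    include ::apache::mod::proxy_http

    # Terminate TLS on the internal API address and forward plain HTTP to
    # the service bound on localhost.
    apache::vhost { 'example-service-tls-proxy':
      servername => 'overcloud.internalapi.example.com',
      ip         => '172.16.2.10',
      port       => 9292,
      docroot    => '/var/www/cgi-bin',
      ssl        => true,
      ssl_cert   => '/etc/pki/tls/certs/example-service.crt',
      ssl_key    => '/etc/pki/tls/private/example-service.key',
      proxy_pass => [{ 'path' => '/', 'url' => 'http://127.0.0.1:9292/' }],
    }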
|
This is the glue between the collectd composable service in
tripleo-heat-templates and the puppet-collectd module.
Change-Id: I7e899e3af870b04dcd45503bd322278997fa53d0
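
For illustration, the glue boils down to something like the following
sketch (the plugin selection is a made-up example; the real profile feeds
hiera data into puppet-collectd):

    include ::collectd

    # puppet-collectd ships a generic collectd::plugin define for simple
    # plugins such as these.
    collectd::plugin { 'cpu': }
    collectd::plugin { 'memory': }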
|
Enable ml2.pp to call the networking-fujitsu manifest in puppet-neutron.
The puppet-neutron implementation is in progress separately.
Change-Id: I5eb2c2a9c50b5991d62f4b6d74b83351c86b02de
Implements: blueprint integration-networking-fujitsu
Depends-On: I37a502b43eb7d91bfe20625248ed117eae3ca535
|
This patch updates the swift proxy so that it only depends
on ceilometer if the ceilometer_api_enabled all-nodes-data hiera
setting has been set.
Also removes a parameter dependency where the
tripleo::profile::base::swift::proxy class was referencing
a puppet-ceilometer value from hiera (which can
also cause ceilometer dependencies).
Depends-On: Ief5399d7ea4d26e96ce54903a69d660fa4fe3ce9
Change-Id: I8d9f69f5e9160543b372bd9886800f16f625fdc6
Closes-bug: #1648736
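
Conceptually the change amounts to something like this sketch (not the
literal diff):

    # Only wire the ceilometer middleware into the Swift proxy when the
    # Ceilometer API is deployed somewhere, as advertised via the
    # all-nodes hiera data.
    if hiera('ceilometer_api_enabled', false) {
      include ::swift::proxy::ceilometer
    }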
|
The Ceilometer API is deprecated in Ocata. Let's disable it by default.
It can still be enabled by setting the enable_legacy_ceilometer_api
param.
Change-Id: Iffb8c2cfed53d8b29e777c35cee44921194239e9
|
6.1.0 will be ocata-2.
Change-Id: Ic5c4e00135a1e876e755e6ada94abb42dd29f46a
|
Add Cinder backend configuration support for the
HPELeftHandISCSIDriver for VSA storage.
Change-Id: Ia7e5f3d436283f7949b0eb8f109b3dc0309af4f5
|
Currently we choose the pacemaker cluster nodes by simply taking
hiera('controller_node_names'). We should instead use the
pacemaker_short_node_names array, which is built dynamically
from all the nodes that include the pacemaker service.
Change-Id: I0a3e4acaab11e078da5eeb2ef2adde5387785927
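
Roughly, the intent is the following (a sketch assuming the
pacemaker::corosync class interface, not the literal profile code):

    # Build the cluster membership from the nodes that include the
    # pacemaker service, rather than from the Controller role.
    $pacemaker_nodes = hiera('pacemaker_short_node_names')

    class { '::pacemaker::corosync':
      cluster_name    => 'tripleo_cluster',
      cluster_members => join($pacemaker_nodes, ' '),
    }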
|
Instead of checking for glance_registry_enabled, we should be checking
for glance_api_enabled. glance-api v1 depends on the registry, which
means the database gets created, but glance-api v2 doesn't, which means
that not deploying the registry would result in the glance database not
being created. On the other hand, glance-registry is never deployed
without glance-api.
Change-Id: Ief25dafb65f7a043fbb3d16f1d7ef834c9947a93
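
In other words, the database creation check becomes something along
these lines (a sketch, not the literal code):

    # glance-registry is never deployed without glance-api, so keying the
    # database creation on glance-api covers both v1 and v2 deployments.
    if hiera('glance_api_enabled', false) {
      include ::glance::db::mysql
    }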
|
MidoNet no longer uses the API component. It has been renamed/refactored
to "cluster", as can be seen in the docs at
https://blog.midonet.org/introducing-midonet-cluster-services/
Also, there is no need for dedicated Cassandra and Zookeeper classes,
as we leverage these through the use of the midonet_openstack
puppet module.
Change-Id: I2f17aeeac2d1b121be0d445ff555320d5af5d270
Partial-Bug: #1647302
|
Change-Id: I2eb5b84dbeedde58153bceb707fd15cce8f03d5e
|
This change adds rspec testing for the cinder profiles within
puppet-tripleo. Additionally, while testing it was found that the
backends may incorrectly have an extra ',' included in the settings
for cinder volume when running Puppet 3. This change includes a fix for
the cinder volume backends to make sure we are not improperly
configuring them with a trailing comma.
Change-Id: Ibdfee330413b6f9aecdf42a5508c21126fc05973
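
For illustration, the shape of the trailing-comma problem and its fix,
using stdlib's delete() and join() (variable names are made up):

    # With Puppet 3, unset backends could show up as empty strings, so a
    # naive join produced e.g. 'tripleo_iscsi,tripleo_ceph,'.
    $backends         = ['tripleo_iscsi', 'tripleo_ceph', '']
    $enabled_backends = join(delete($backends, ''), ',')
    # => 'tripleo_iscsi,tripleo_ceph'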
|
Reno [1] is the release notes management tool used in OpenStack.
This patch adds the basic structure to start using it for doc
builds in puppet-tripleo.
* Update .gitignore
* Add a basic note "use-reno"
* Add releasenotes/ dir and basic files
* Add python files: setup.cfg, setup.py, test-requirements.txt and
tox.ini.
[1] http://docs.openstack.org/developer/reno
Change-Id: Idc9a30ab632c8e2ca794fb10431cdefd5d861d14
|
Change-Id: I729702a5326d74ad35485fa7276af45e2223ec5f
|
This only takes effect if internal TLS is used, and forces HAProxy to
do proper verification of the SSL certificates provided by the backend
servers.
bp tls-via-certmonger
Change-Id: Ibd98ec46dd6570887db29f55fe183deb1c9dc642
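
Conceptually, the balancer member options gain verification flags when
internal TLS is enabled, along the lines of this sketch (the CA path and
option strings are assumptions):

    $internal_tls = hiera('enable_internal_tls', false)

    # With internal TLS, HAProxy must verify the backend certificate
    # against the deployment CA.
    $member_options = $internal_tls ? {
      true    => ['check', 'ssl', 'verify required', 'ca-file /etc/ipa/ca.crt'],
      default => ['check'],
    }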
|
Puppet 4 ordering makes things more strict in the catalog, which is good.
Resources have to be explicitly ordered or Puppet will apply them in the
order they are found in the catalog.
This patch makes sure we create MySQL users only when Galera is actually
ready.
Closes-Bug: #1645787
Change-Id: I536a1a128c3a7eca49bcc4f34a1307bcd60b029e
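
The ordering idea looks roughly like this (a sketch; the real profile
uses its own check command and timings):

    # Wait until the local Galera node reports itself healthy, then allow
    # any Mysql_user resources to be applied.
    exec { 'galera-ready':
      command   => '/usr/bin/clustercheck',
      path      => ['/usr/bin', '/bin'],
      timeout   => 30,
      tries     => 180,
      try_sleep => 10,
    }

    Exec['galera-ready'] -> Mysql_user<| |>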
|
This sets the cluster_nodes configuration in RabbitMQ to use FQDNs
instead of IP addresses. Note that in HA, RabbitMQ is already
configured using FQDNs.
Change-Id: I2b1cec25ff25f4afd72a28246c2cda9c58d7b61e
|
This replaces the services' IP-based RabbitMQ configuration and uses
FQDNs instead.
Change-Id: I2be81aecacf50839a029533247981f5edf59cb7f
|