Configure Nova with the new Oslo Messaging parameters for RabbitMQ.
Note: the parameters are renamed to standard names, which will help a
future transition to another messaging backend in TripleO.
Change-Id: Ia67a4dbe5b2bd12c45308a5581f96d0457b8e018
We need to run the basic cell v2 setup for nova as it is required for
Ocata.
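For reference, the cells v2 bootstrap boils down to a handful of nova-manage
calls. A rough sketch of that sequence with exec resources is below; the real
change delegates this to the puppet-nova classes from the Depends-On patches,
and the resource titles here are invented:

    # Not idempotent as written; the actual profile guards these properly.
    exec { 'nova-cell_v2-map-cell0':
      command => 'nova-manage cell_v2 map_cell0',
      path    => ['/usr/bin', '/usr/sbin'],
    }
    -> exec { 'nova-cell_v2-create-default-cell':
      command => 'nova-manage cell_v2 create_cell --name=default',
      path    => ['/usr/bin', '/usr/sbin'],
    }
    -> exec { 'nova-cell_v2-discover-hosts':
      command => 'nova-manage cell_v2 discover_hosts',
      path    => ['/usr/bin', '/usr/sbin'],
    }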
Change-Id: I693239ff5026f58a65eb6278b1a8fcb97af4f561
Depends-On: I43ba77cd4c8da7c6dc117ab0bd53e5cd330dc3de
Depends-On: I9462ef16fd64a577c3f950bd121f0bd28670fabc
Closes-Bug: #1649341
This closes CVE-2016-9599.
1) Sanitize dynamic HAProxy endpoint firewall rules
Build the hash of firewall rules only when a port is specified. The
HAProxy endpoints use the TCP protocol, which means we have to specify
a port in the iptables rules.
Some services have no public network exposure (e.g. Glance Registry),
which means they don't need an haproxy_ssl rule.
The code prepares the hash depending on the service_port and
public_ssl_port parameters and creates the actual firewall rules only if
one or both of those parameters are specified.
This prevents new services without public exposure from opening all
traffic because no port is specified.
2) Secure firewall rule creation
The code no longer allows creating TCP / UDP iptables rules in the INPUT
or OUTPUT chains without port, sport, or dport, because such a rule
would open all traffic for TCP or UDP.
If we try to do that, the Puppet catalog will fail with an error
explaining why.
Example use-cases:
- creating VRRP rules doesn't require port parameters.
- creating TCP or UDP rules does require port parameters.
3) Allow opening all traffic for TCP / UDP (when desired)
Some use-cases require opening all traffic on all ports for TCP / UDP.
This is possible if the user passes port = 'all' when creating the
firewall rule.
Backward compatibility:
- if our users created custom TCP / UDP firewall rules without port
parameters, they won't work anymore, for security reasons.
- if our users want to open TCP / UDP on all ports, they need to pass
port = 'all' and the rule will be created, though a warning will be
displayed because this is insecure.
- if our users created custom VRRP rules without port parameters, they
will still work correctly and the rules will be created.
- TCP / UDP rules in the FORWARD chain without a port are still accepted.
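To illustrate the resulting behaviour, here is a sketch using the
tripleo::firewall::rule defined type; the rule names and the port used are
invented for the example:

    # VRRP has no ports, so no port parameter is required:
    tripleo::firewall::rule { '106-vrrp':
      proto => 'vrrp',
    }

    # TCP / UDP rules in the INPUT or OUTPUT chains must now carry a port:
    tripleo::firewall::rule { '120-example-api':
      proto => 'tcp',
      dport => 8042,
    }

    # Opening all TCP ports is still possible, but only when explicitly
    # requested, and a warning is displayed:
    tripleo::firewall::rule { '199-open-all-tcp':
      proto => 'tcp',
      port  => 'all',
    }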
Change-Id: I19396c8ab06b91fee3253cdfcb834482f4040a59
Closes-Bug: #1651831
I'm seeing this run yum update on deploy, even though the hiera value
tripleo::packages::enable_upgrade says false.
I assume these changes are needed because we're getting the string
"false", but I'm unclear whether this is a recently introduced problem
(I only noticed it today because my image has outdated CentOS packages,
so the deploy hung on step 2).
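If the root cause is indeed the hiera lookup returning the string "false"
(which is truthy in Puppet), the usual guard looks roughly like the sketch
below; str2bool comes from puppetlabs-stdlib and the exec is only
illustrative:

    $enable_upgrade_raw = hiera('tripleo::packages::enable_upgrade', false)
    # Interpolating first makes str2bool safe whether hiera returned a real
    # boolean or the string "false"/"true".
    $enable_upgrade = str2bool("${enable_upgrade_raw}")

    if $enable_upgrade {
      exec { 'package-upgrade':
        command => 'yum -y update',
        path    => ['/usr/bin', '/usr/sbin'],
      }
    }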
Change-Id: If09cdde9883f2674dbbc40944be5fe4445caa08e
Closes-Bug: #1652107
Enable ml2.pp to call the networking-fujitsu manifest in puppet-neutron
for the fossw ML2 plugin settings.
Change-Id: I044c5812bbc5cd3de4bc33556cffbe5bad8e64cf
Implements: blueprint integration-fossw-networking-fujitsu
Depends-On: I79df6b6a27d95f0c0e2c87207ab80235a4efccfc
A Puppet manifest to allow toggling 'Banner' in sshd_config and enable
population of an SSH login banner, as needed for security compliance
such as the DISA STIG.
If `Bannertext` is set as a parameter, the `Banner` key within
sshd_config is pointed at `/etc/issue` and the banner content is copied
into the `/etc/issue` file.
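A minimal sketch of that mechanism (resource titles and the example banner
text are illustrative, not the actual manifest; file_line comes from
puppetlabs-stdlib and the sshd reload is assumed to be handled elsewhere):

    $bannertext = 'Authorized uses only. All activity may be monitored.'

    if $bannertext and $bannertext != '' {
      file { '/etc/issue':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        content => $bannertext,
      }

      # Point sshd at the banner file.
      file_line { 'sshd-banner':
        path  => '/etc/ssh/sshd_config',
        line  => 'Banner /etc/issue',
        match => '^#?Banner',
      }
    }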
Change-Id: Ie9f8afdfa9930428f06c9669fedb460dc1064d5e
Closes-Bug: #1640306
Remove the inclusion of Neutron services in the OpenDaylight API service.
This was breaking custom roles when splitting OpenDaylight and Neutron
into different roles. Also, references to "controller" are removed
because with custom roles OpenDaylight could be isolated to any role
type. The 'bootstrap_nodeid' still works with any role, and only
resolves to the first node in the series of nodes for that role type.
Partial-Bug: 1651499
Change-Id: I418643810ee6b8a2c17a4754c83453140ebe39c7
Signed-off-by: Tim Rozet <trozet@redhat.com>
The Swift base class is required if the SwiftStorageRole
is used without the SwiftRingBuilder role.
Change-Id: I496b65dc53c03c0711d21f98627cc21be653ca5d
Some services need a terminating proxy to do TLS on their main
interfaces. To address this, we use httpd's mod_proxy and make it listen
in front of these services with an appropriate certificate.
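A rough sketch of such a terminating proxy using puppetlabs-apache's
apache::vhost; the service name, addresses, port and certificate paths are
placeholders:

    ::apache::vhost { 'example-tls-proxy':
      servername => 'overcloud.internalapi.example.com',
      ip         => '172.16.2.10',
      port       => 5000,
      docroot    => '/var/www/',
      ssl        => true,
      ssl_cert   => '/etc/pki/tls/certs/example.crt',
      ssl_key    => '/etc/pki/tls/private/example.key',
      # Terminate TLS here and forward plain HTTP to the service bound
      # on localhost.
      proxy_pass => [
        { 'path' => '/', 'url' => 'http://127.0.0.1:5000/' },
      ],
    }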
bp tls-via-certmonger
Change-Id: I82243fd3acfe4f23aab373116b78e1daf9d08467
This is useful to customize the libvirt/qemu.conf limits when deploying
a large overcloud or one with many Ceph OSDs.
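For example, something along these lines becomes possible (a sketch; the
class and parameter names are assumed to match the puppet-nova change in the
Depends-On, and the values are arbitrary):

    class { '::nova::compute::libvirt::qemu':
      # Raise the per-process limits written to /etc/libvirt/qemu.conf.
      max_files     => 32768,
      max_processes => 131072,
    }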
Change-Id: I258afd3ee6633e4b2ebc45aa8611be652476be0c
Depends-On: I5fa423a4b212d14f6e9ff6a270931b569558b54e
This is the glue between the collectd composable service in
tripleo-heat-templates and the puppet-collectd module.
Change-Id: I7e899e3af870b04dcd45503bd322278997fa53d0
Enable ml2.pp to call the networking-fujitsu manifest in puppet-neutron.
The implementation in puppet-neutron is in progress separately.
Change-Id: I5eb2c2a9c50b5991d62f4b6d74b83351c86b02de
Implements: blueprint integration-networking-fujitsu
Depends-On: I37a502b43eb7d91bfe20625248ed117eae3ca535
This patch updates the swift proxy so that it only depends
on ceilometer if the ceilometer_api_enabled all-nodes-data hiera
setting has been set.
Also removes a parameter dependency where the
tripleo::profile::base::swift::proxy class was referencing
a puppet-ceilometer value from hiera (which can
also cause ceilometer dependencies).
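The resulting logic is roughly the following sketch (the hiera key is the
one named above; swift::proxy::ceilometer is the puppet-swift middleware
class, and the exact placement in the profile is simplified away):

    if hiera('ceilometer_api_enabled', false) {
      include ::swift::proxy::ceilometer
    }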
Depends-On: Ief5399d7ea4d26e96ce54903a69d660fa4fe3ce9
Change-Id: I8d9f69f5e9160543b372bd9886800f16f625fdc6
Closes-bug: #1648736
The Ceilometer API is deprecated in Ocata. Let's disable it by default.
It can still be enabled by setting the enable_legacy_ceilometer_api
parameter.
Change-Id: Iffb8c2cfed53d8b29e777c35cee44921194239e9
6.1.0 will be ocata-2.
Change-Id: Ic5c4e00135a1e876e755e6ada94abb42dd29f46a
Cinder backend configuration support for the HPELeftHandISCSIDriver
for VSA storage.
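For illustration, the backend this enables is configured via puppet-cinder's
cinder::backend::hpelefthand_iscsi; the API URL, credentials and cluster name
below are placeholders, and the hieradata-driven wiring in TripleO is
omitted:

    cinder::backend::hpelefthand_iscsi { 'tripleo_hpelefthand':
      hpelefthand_api_url     => 'https://vsa.example.com:8081/lhos',
      hpelefthand_username    => 'admin',
      hpelefthand_password    => 'secret',
      hpelefthand_clustername => 'vsa-cluster',
    }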
Change-Id: Ia7e5f3d436283f7949b0eb8f109b3dc0309af4f5
Currently we choose the pacemaker cluster nodes by simply taking
hiera('controller_node_names'). We should instead use the
pacemaker_short_node_names array, which is built dynamically
from all the nodes that include the pacemaker service.
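In other words, roughly the following sketch (the real profile massages the
node list further before handing it to puppet-pacemaker):

    # Build the cluster member list from every node that includes the
    # pacemaker service, instead of the static controller list.
    $cluster_members = join(hiera('pacemaker_short_node_names'), ' ')

    class { '::pacemaker::corosync':
      cluster_members => $cluster_members,
    }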
Change-Id: I0a3e4acaab11e078da5eeb2ef2adde5387785927
Instead of checking for glance_registry_enabled, we should be checking
for glance_api_enabled. glance-api v1 depends on the registry, which
means the database will be created, but glance-api v2 doesn't, which
means that not deploying the registry would result in the glance
database not being created. On the other hand, glance-registry is never
deployed without glance-api.
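The corrected guard therefore looks roughly like this sketch (where exactly
the check lives in the profiles is simplified away; the database user and
password are auto-bound from hieradata):

    # Create the glance database whenever glance-api is deployed; the
    # registry is never deployed without it.
    if hiera('glance_api_enabled', false) {
      include ::glance::db::mysql
    }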
Change-Id: Ief25dafb65f7a043fbb3d16f1d7ef834c9947a93
MidoNet no longer uses the API component. It has been renamed/refactored
to "cluster", as can be seen in the docs at
https://blog.midonet.org/introducing-midonet-cluster-services/
Also, there is no need for dedicated Cassandra and ZooKeeper classes,
as we leverage this through the use of the midonet_openstack
puppet module.
Change-Id: I2f17aeeac2d1b121be0d445ff555320d5af5d270
Partial-Bug: #1647302
Change-Id: I2eb5b84dbeedde58153bceb707fd15cce8f03d5e
This change adds rspec testing for the cinder profiles within
puppet-tripleo. Additionally, while testing, it was found that the
backends may incorrectly have an extra ',' included in the settings
for cinder volume when running Puppet 3. This change includes a fix for
the cinder volume backends to make sure we are not improperly
configuring them with a trailing comma.
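The trailing comma comes from empty entries sneaking into the backend list
under Puppet 3; the fix amounts to filtering the list before handing it to
cinder, roughly as in this sketch (the backend variables are made up and
delete_undef_values is from puppetlabs-stdlib):

    $iscsi_backend_name = 'tripleo_iscsi'
    $rbd_backend_name   = undef   # e.g. this backend is disabled
    $nfs_backend_name   = undef

    # Filter out the unset backends so cinder.conf does not end up with
    # something like "tripleo_iscsi,,".
    $enabled_backends = delete_undef_values([
      $iscsi_backend_name,
      $rbd_backend_name,
      $nfs_backend_name,
    ])

    class { '::cinder::backends':
      enabled_backends => $enabled_backends,
    }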
Change-Id: Ibdfee330413b6f9aecdf42a5508c21126fc05973
Reno [1] is the release management tool used in OpenStack.
This patch adds the basic structure to start using it for doc
builds in puppet-tripleo.
* Update .gitignore
* Add a basic note "use-reno"
* Add releasenotes/ dir and basic files
* Add python files: setup.cfg, setup.py, test-requirements.txt and
tox.ini.
[1] http://docs.openstack.org/developer/reno
Change-Id: Idc9a30ab632c8e2ca794fb10431cdefd5d861d14
Change-Id: I729702a5326d74ad35485fa7276af45e2223ec5f
This makes it possible to create loopback devices for Swift by using
additional Hieradata. For example, to create a 100M loopback device that
will be mounted on /srv/node/d1:
  parameter_defaults:
    ControllerExtraConfig:
      swift::storage::loopbacks::args: {"d1": {'seek': '100000'}}
Depends-On: I11a230b7cf08a4cc2a144d9af0e6c81bb3827348
Change-Id: I8741acc8afa1f1d23c5b25fa4bf27622f674a561
This only takes effect if internal-tls is used, and forces haproxy to
do proper verification of the SSL certificates provided by the
servers.
bp tls-via-certmonger
Change-Id: Ibd98ec46dd6570887db29f55fe183deb1c9dc642
Without setting these explicitly, this defaults to 127.0.0.1:11211. The
object-expirer works without a correct memcache server, but it is
slower and warnings will be logged.
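The end result is roughly the following sketch; the memcache_servers
parameter is what the Depends-On adds in puppet-swift, and the hiera key and
port are the usual TripleO ones:

    # Point the object-expirer at the real memcached nodes instead of the
    # implicit 127.0.0.1:11211 default (suffix() is from puppetlabs-stdlib).
    $memcache_servers = suffix(hiera('memcached_node_ips'), ':11211')

    class { '::swift::objectexpirer':
      memcache_servers => $memcache_servers,
    }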
Related-Bug: 1627927
Depends-On: Ie139f018c4a742b014dd4d682970e154d66a8c6d
Change-Id: I89a879592a264d541cf42f007584e2e78058c867