Provides a simple mechanism to verify that the correct certificates
landed.
A quick and simple way to verify that an SSL certificate was generated
for a given key is to compare the modulus of the two. By outputting the
key modulus and the certificate modulus, we offer a way to verify that
the right cert and key have been deployed without compromising any of
the secrets.
Change-Id: I882c9840719a09795ba8057a19b0b3985e036c3c
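A quick way to run that comparison by hand (a sketch; the file names are
placeholders):

    openssl x509 -noout -modulus -in overcloud-cert.pem | openssl md5
    openssl rsa  -noout -modulus -in overcloud-key.pem  | openssl md5

If the two digests match, the certificate was generated from that key.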
This commit enables the injection of a trust anchor or root
certificate into every node in the overcloud. This covers the case
where the TLS certificates for the controllers are signed by a
self-signed CA, or where the deployer would like to inject a relevant
root certificate for other purposes. In these cases the other nodes
might need to have the root certificate in their trust chain in order
to perform proper validation.
Change-Id: Ia45180fe0bb979cf12d19f039dbfd22e26fb4856
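On RHEL-family nodes the injection boils down to something like the
following (the file name is illustrative; the templates automate this
step):

    sudo cp my-root-ca.crt /etc/pki/ca-trust/source/anchors/
    sudo update-ca-trust extract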
This is a first implementation of adding TLS termination to the load
balancer on the controllers. The appropriate certificate/private key in
PEM format is copied to the appropriate controller(s) via a software
deployment resource, and the path is then referenced in the HAProxy
configuration. That part is left commented out for now because we need
to be able to configure the keystone endpoints for this to work
properly.
Change-Id: I0ba8e38d75a0c628d8132a66dc25a30fc5183c79
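For reference, HAProxy expects the certificate and private key
concatenated into a single PEM file, and the terminated endpoint can be
spot-checked with openssl (path, VIP and port below are examples only):

    cat overcloud-cert.pem overcloud-key.pem > /etc/pki/tls/private/overcloud_endpoint.pem
    openssl s_client -connect "$PUBLIC_VIP:443" -showcerts </dev/null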
We don't necessarily want the network configuration to be reapplied
with every template update, so we add a parameter to configure the
actions on which the NetworkDeployment resource should be executed.
Change-Id: I0e86318eb5521e540cc567ce9d77e1060086d48b
Co-Authored-By: Dan Sneddon <dsneddon@redhat.com>
Co-Authored-By: James Slagle <jslagle@redhat.com>
Co-Authored-By: Jiri Stransky <jstransk@redhat.com>
Co-Authored-By: Steven Hardy <shardy@redhat.com>
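As a usage sketch, an operator could pin network configuration to the
initial creation only via an environment file (the parameter name
NetworkDeploymentActions is an assumption here):

    echo 'parameter_defaults: {NetworkDeploymentActions: [CREATE]}' > network-deployment-actions.yaml
    openstack overcloud deploy --templates -e network-deployment-actions.yaml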
Results from pmap of idle nova-compute:
https://gist.github.com/jtaleric/addd9079d6cdf4f7cf42
Results from free -m and cat /proc/meminfo:
https://gist.github.com/jtaleric/410130f09c2aad2dc7e9
bug: https://bugzilla.redhat.com/show_bug.cgi?id=1282644
Change-Id: I9b3ceecabfdae0a516cfc72886fde7b26cc68f82
* Add NovaApiVirtualIP string parameter.
* Compute nova_url and nova_admin_auth_url parameters.
* Configure in Hiera neutron::server::notifications::* parameters.
* non-ha: include ::neutron::server::notifications
* ha: include ::neutron::server::notifications and create orchestration
* Set vif_plugging_is_fatal to True so we actually fail if Neutron is not
  able to create the VIF during the Nova server creation workflow.
Depends-On: I21dc10396e92906eab4651c318aa2ee62a8e03c7
Change-Id: I02e41f87404e0030d488476680af2f6d45af94ff
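One way to spot-check the resulting configuration on the deployed nodes
(nova.conf on computes, neutron.conf on controllers; paths are the
usual defaults):

    grep -E 'vif_plugging_(is_fatal|timeout)' /etc/nova/nova.conf
    grep -E 'notify_nova_on_port_(status|data)_changes|nova_url|nova_admin_auth_url' /etc/neutron/neutron.conf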
* Use the parameter in Puppet configuration (Hiera) to configure neutron
BZ-1273303
Change-Id: Ic5a7a1f13fd2bc800cadc3a78b1daadbc0394787
Signed-off-by: Cyril Lopez <cylopez@redhat.com>
When the cluster is brought back online after a yum update in
yum_update.sh, we should verify that Galera is fully synced before
moving on to update any other nodes in the cluster.
Change-Id: Ie8fc2c5d5214deacea94ca658ac75359b318ced1
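A minimal sketch of the kind of wait loop this implies (yum_update.sh
may implement the check differently):

    until [ "$(mysql -Nse "SHOW STATUS LIKE 'wsrep_local_state_comment'" | awk '{print $2}')" = "Synced" ]; do
        echo "Waiting for galera to sync..."
        sleep 5
    done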
Create a bridge for the overcloud services using a linux bridge instead
of openvswitch. Some SDNs may be incompatible with the openvswitch
datapath.
Change-Id: I873368e74ddfd95bf5c6e1f88cec33ba011e09dd
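Roughly what a linux-bridge based bridge amounts to on a node (names
are illustrative; the deployed network configuration is rendered by the
templates):

    ip link add name br-ex type bridge
    ip link set dev eth0 master br-ex
    ip link set dev br-ex up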
This change adds support for enabling/disabling L2 population in
Neutron agents. It currently defaults to false.
Change-Id: I3dd19feb4acb1046bc560b35e5a7a111364ea0d7
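To confirm what was rendered on a node, the agent configuration can be
grepped directly (the path may vary by release):

    grep -r 'l2_population' /etc/neutron/plugins/ml2/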
Pass the ceph::pool properties as arguments to the class call
instead of setting them as class defaults.
Ceph recommends at most 32 and at least 4 PGs per OSD, so this change
also lowers the default to 32, which works with 1 OSD, suits a scenario
with 3 OSDs well, and is easy to customize in the static hiera if more
than 8 OSDs are deployed (beyond 8 OSDs, 32 PGs would fall below the
4-per-OSD minimum).
More info at: https://bugzilla.redhat.com/show_bug.cgi?id=1252546
Change-Id: Ifed11d1857900b2251dfdf69d6b6f168150e6330
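A quick sanity check of the resulting ratio on a deployed cluster
('vms' is one of the default pools; adjust as needed):

    ceph osd pool get vms pg_num
    ceph osd ls | wc -l    # number of OSDs; aim for roughly 4-32 PGs per OSD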
When deploying director with an NFS backend for Cinder, NFS mount
options are sometimes not needed. If the option is omitted, or if it is
set to '', the deployment fails.
This patch simply adds a default value for this option.
Change-Id: Idf708aaecebd5c6db14f48ad2a53d6c2453be5ee
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1281870
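With the default in place nothing needs to be set, and deployments that
do want mount options can still pass them in an environment file (the
parameter name CinderNfsMountOptions is an assumption here):

    echo 'parameter_defaults: {CinderNfsMountOptions: "rsize=8192,wsize=8192"}' > cinder-nfs.yaml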
This matches change I6fc18f1ad876c5a25723710a3b20d8ec9519dcba, but we
need to set it before attempting the cluster stop / yum update /
cluster start cycle, to make sure the cycle doesn't hit the low timeout
limits.
This can be removed once updates from deployments made prior to
I6fc18f1ad876c5a25723710a3b20d8ec9519dcba are no longer supported.
Change-Id: I587136d8d045d213875c657ea5a405074f80c8ad
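The sort of adjustment this implies, expressed with pcs (the resource
name is only an example; the script applies it to the relevant
resources):

    pcs resource update openstack-keystone op start timeout=100s op stop timeout=100s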
This bumps the stop/start timeout for the pcmk/systemd services further
up, so that it matches the 100s default set in future pcmk
versions [1].
1. https://github.com/ClusterLabs/pacemaker/commit/17d65e9f44061a4fa14a9cddd6edc403b2d6d2b3
Change-Id: I6fc18f1ad876c5a25723710a3b20d8ec9519dcba
We've heard from end users that it is confusing that puppet
isn't re-executed on a heat stack-update.
This patch adds a new DeployIdentifier parameter, which we can set via
client tooling (tripleoclient) to a unique value, so that on each heat
stack-update we always execute all of our configuration deployments.
Change-Id: Ic352ddd30807dc378e5e7b6c396bc53f5d6d5622
Related-bug: #1505430
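A manual equivalent of what the client tooling does on every
stack-update, for illustration (any fresh value works):

    echo "parameter_defaults: {DeployIdentifier: $(date +%s)}" > deploy-identifier.yaml
    openstack overcloud deploy --templates -e deploy-identifier.yaml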
The default bond mode of balance-tcp is causing issues with
deployments, so default to active-backup instead.
After ~100 guests (total), connectivity to each guest would become
spotty (simple pings would fail, then succeed again). In
/var/log/messages we saw:
"overcloud-controller-1 kernel: openvswitch: ovs-system: deferred action
limit reached, drop recirc action"
For more details, refer to this link:
http://openvswitch.org/pipermail/discuss/2015-October/019168.html
Change-Id: Ia0f2592a289e13472b98d97057cd516c5048fe59
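Checking or changing the bond mode on a deployed node (the bond name is
an example):

    ovs-appctl bond/show bond1
    ovs-vsctl set port bond1 bond_mode=active-backup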
Some missing pacemaker constraints were added in the following commits:
https://review.openstack.org/#/c/219770/
https://review.openstack.org/#/c/219665/
https://review.openstack.org/#/c/218931/
https://review.openstack.org/#/c/218930/
Overclouds that were deployed before these constraints were added to
tripleo-heat-templates are still missing them. During an update,
stopping and starting the cluster can fail without these constraints in
place. As a workaround, conditionally add these constraints in
yum_update.sh so that we're sure they're always present before
updating.
Change-Id: Id46c85dbbe5e85d362279661091b17ce1b697fe0
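The pattern is roughly: only add a constraint if it is not already
present. A sketch with illustrative resource names:

    if ! pcs constraint order show | grep -q 'start openstack-nova-novncproxy-clone then start openstack-nova-api-clone'; then
        pcs constraint order start openstack-nova-novncproxy-clone then start openstack-nova-api-clone
    fi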
We need to pass details of the Glance Registry and public Horizon
endpoints to the load balancers, so add them to the EndpointMap.
Change-Id: Ia6261223e7701734f47ce48471c86f690ba3dcd5
We expose all of the other parameters, so expose the IP too for
consistency.
Change-Id: I5c31befde51e398318c7b8c744310212288ad892
CloudName is the DNS name for the public VIP. This means we will likely
want it available for use in the endpoint hostnames, rather than
requiring people to copy and paste the same hostname.
Change-Id: Ic6d708b083244442195eee890de91bbc7e133ec2
Because many of the service endpoint URLs use the same patterns for
generating the URLs, it makes sense to use the same templates to reduce
the copy and paste.
In the process this also adds support for explicitly specifying
hostnames for use in the endpoints. Note: DNS must be pre-configured;
the Heat templates do not directly configure DNS.
Change-Id: Ie3270909beca3d63f2d7e4bcb04c559380ddc54d
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
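A sketch of the idea behind the EndpointMap: each endpoint is described
by protocol/port/host parts and the URLs are generated from shared
templates. The keys and structure below are illustrative, not the exact
schema:

    echo 'parameter_defaults: {EndpointMap: {GlancePublic: {protocol: https, port: "9292", host: glance.example.com}}}' > endpoint-hostnames.yaml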