|
It doesn't exist in non-containerized OpenStack, so leave it
stubbed out by default.
Closes-Bug: #1721212
Change-Id: I5fcb1f0b9958ac90f034a12f1ee733dae6571f9c
(cherry picked from commit a850d8059fbc1c36efb18773e40bb600e5da5005)
|
|
The Networker role should not have the API services run on it. Instead,
these services should run as part of the ControllerOpenstack role, which
should be used together with this role.
Change-Id: Iabfe276fe700843f3a8da0b9e9220b2f82e20ec9
Closes-Bug: #1718299
(cherry picked from commit 964a5d738b8dbb6beb077d76448c6f3a84be2500)
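As an illustration, the intended pairing in roles_data.yaml looks
roughly like this (a minimal sketch with the service lists heavily
abbreviated; entries assume standard THT service names):

    - name: ControllerOpenstack
      ServicesDefault:
        - OS::TripleO::Services::NeutronApi    # API services stay here
        - OS::TripleO::Services::NovaApi
    - name: Networker
      ServicesDefault:
        - OS::TripleO::Services::NeutronL3Agent    # agents only, no APIs
        - OS::TripleO::Services::NeutronDhcpAgent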
|
|
The clustercheck service is currently in the ControllerOpenstack role,
which represents a controller without the DB. Since the clustercheck
service/container always talks to the SQL server via a localhost
connection, it *has* to run on the very same node that hosts the DB.
In a containerized deployment this error shows up as db syncs simply
hanging: haproxy stops serving port 3306 because the clustercheck
service on port 9200 cannot talk to mysql locally.
Errors like this will be logged when trying to connect to the DB VIP:
mysql -u heat -h 172.17.1.13 -p3UazsaeTC64V9UvEcJ3GZ9rbd
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Fix this by making sure that the clustercheck service runs on
the DB role.
Change-Id: Iec4c9678d8b8d44e002c1e53110dedc0674359fb
Closes-Bug: #1715847
(cherry picked from commit 1760079dfe5905f2e696b9fc5c729cffa44554ae)
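The fix amounts to moving the service entry between role definitions in
roles_data.yaml; a rough sketch (lists abbreviated, standard THT service
names assumed):

    - name: Database
      ServicesDefault:
        - OS::TripleO::Services::MySQL
        - OS::TripleO::Services::Clustercheck    # must be co-located with the DB
    - name: ControllerOpenstack
      ServicesDefault:
        - OS::TripleO::Services::HAProxy
        # Clustercheck removed from here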
|
|
This change adds support for manila::backend::dellemc_isilon
Change-Id: I92592e4b717d4b1812ccd810ec1daaedd181c3dd
Implements: blueprint dellemc-isilon-manila
(cherry picked from commit f6c9906d51fb3268b7a7d61d53181ab5d3c0d2ec)
|
|
This change adds support for manila::backend::dellemc_vmax
Change-Id: I92e189c8741c496ef6c27130f73829c327a99f1b
Implements: blueprint dellemc-vmax-manila
(cherry picked from commit 04daabdc8414e4435dc4cd3ccfea9a62b5631261)
|
|
This change adds support for manila::backend::dellemc_vnx
Change-Id: I5fa5c2d6956429d1b9c12a5af6d4a887ed0624d9
Implements: blueprint dellemc-vnx-manila
(cherry picked from commit a3debcfa8b2cbb3acaba292e082b0a3b0ee8ef54)
|
|
This change adds support for manila::backend::dellemc_unity
Change-Id: Idec67d190b12359e8e6f1c157577088fa84ef41d
Implements: blueprint dellemc-unity-manila
(cherry picked from commit c5ee7b7714c712807f33ca1645186d33103a2264)
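The four Dell EMC Manila backend changes above all enable their backend
the same way; a minimal sketch for one of them, assuming an environment
file along these lines (the template path and parameter name are
illustrative, not taken from the shipped files):

    resource_registry:
      OS::TripleO::Services::ManilaBackendUnity: ../puppet/services/manila-backend-unity.yaml

    parameter_defaults:
      ManilaUnityDriverHandlesShareServers: true    # illustrative parameter name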
|
|
Add a docker service template to provide containerized services
log rotation with a crond job.
Add OS::TripleO::Services::LogrotateCrond to CI multinode-containers
and to all environments, along with generic services like Ntp or Kernel.
Set it to OS::Heat::None for non-containerized environments and
enable it only in environments/docker.yaml.
Closes-bug: #1700912
Change-Id: Ic94373f0a0758e9959e1f896481780674437147d
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
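Concretely, the two environment-side mappings described above look
roughly like this (paths as implied by the message, not verified):

    # environments/docker.yaml
    resource_registry:
      OS::TripleO::Services::LogrotateCrond: ../docker/services/logrotate-crond.yaml

    # non-containerized environments
    resource_registry:
      OS::TripleO::Services::LogrotateCrond: OS::Heat::None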
|
|
This is needed for TLS everywhere; otherwise the certs won't be requested.
Change-Id: I9849e009843683a75fefa6e9f4b8213bcff3a889
Closes-Bug: #1711424
|
|
The Ceilometer API and collector are disabled in Pike. During an upgrade,
if they are not in the roles_data the disable tasks don't get picked
up and the services continue to run. This should be removed in the Queens cycle.
Change-Id: I3bf555ac9488fc6622e6a62a809150082a85ea54
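In practice that means keeping the entries in roles_data while the
services themselves stay disabled, so their upgrade tasks still run;
a minimal sketch:

    - name: Controller
      ServicesDefault:
        # kept only so the Pike disable/upgrade tasks get picked up
        - OS::TripleO::Services::CeilometerApi
        - OS::TripleO::Services::CeilometerCollector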
|
|
Presently the ovn-controller service (puppet/services/neutron-compute-plugin-ovn.yaml)
is started only on compute nodes. But for the cases where the controller nodes
provide the north/south traffic, we need the ovn-controller service running on
controller nodes as well.
This patch
- Renames neutron-compute-plugin-ovn.yaml to ovn-controller.yaml, which makes
  more sense, and sets the service name to 'ovn-controller'.
- Adds the service 'ovn-controller' to the Controller and Compute roles
  (a sketch of the resulting role entries follows below).
- Adds the missing 'upgrade_tasks' section in ovn-dbs.yaml and ovn-controller.yaml.
Depends-On: Ie3f09dc70a582f3d14de093043e232820f837bc3
Depends-On: Ide11569d81f5f28bafccc168b624be505174fc53
Change-Id: Ib7747406213d18fd65b86820c1f86ee7c39f7cf5
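A sketch of the resulting role entries in roles_data.yaml (lists
abbreviated; service names assumed to follow the usual THT convention):

    - name: Controller
      ServicesDefault:
        - OS::TripleO::Services::OVNDBs
        - OS::TripleO::Services::OVNController
    - name: Compute
      ServicesDefault:
        - OS::TripleO::Services::OVNController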
|
|
Allow the user to set a specific Tuned profile on a given host.
Defaults to throughput-performance.
Change-Id: I0c66193d2733b7a82ad44b1cd0d2187dd732065a
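For example, a deployment could override the profile per role via
parameter_defaults; the parameter name here is the one implied by the
message and should be treated as an assumption:

    parameter_defaults:
      ComputeParameters:
        TunedProfileName: virtual-host    # assumed name; defaults to throughput-performance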
|
|
This currently assumes nova-compute and iscsid run in the same context,
which isn't true for a containerized deployment.
Change-Id: I11232fc412adcc18087928c281ba82546388376e
Depends-On: I91f1ce7625c351745dbadd84b565d55598ea5b59
Depends-On: I0cbb1081ad00b2202c9d913e0e1759c2b95612a5
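Splitting iscsid into its own containerized service implies an
environment mapping roughly like this (template path assumed):

    resource_registry:
      OS::TripleO::Services::Iscsid: ../docker/services/iscsid.yaml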
|
|
This adds a docker-ha.yaml that can be passed to the deployment
environments in order to get a containerized HA deployment.
Until we make the containerized deployment the default, the operator
must first include docker.yaml and *then* docker-ha.yaml in order
to get a containerized overcloud with an HA control plane.
We also make sure that the ClusterCheck service is set to None
by default and is part of the Controller roles.
Change-Id: I13204d70aad8dfeaf2bcf2ae30a1bb4715167659
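Usage then looks like the following; as noted above, the ordering of
the two environment files matters:

    openstack overcloud deploy --templates \
      -e environments/docker.yaml \
      -e environments/docker-ha.yaml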
|
|
Currently there are some hard-coded references to roles here; rendering
from roles_data.yaml is a step towards making the use of isolated
networks with custom roles easier.
Partial-Bug: #1633090
Depends-On: Ib681729cc2728ca4b0486c14166b6b702edfcaab
Change-Id: If3989f24f077738845d2edbee405bd9198e7b7db
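The rendered approach replaces hard-coded role names with a jinja2 loop
over the roles data; a simplified sketch of the pattern (not the exact
template):

    {% for role in roles %}
      {{role.name}}IpListMap:
        type: OS::TripleO::Network::Ports::NetIpListMap
    {% endfor %}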
|
|
As we create new standard roles, we should include them from a single
location for ease of use and to reduce the duplication of the role
definitions elsewhere. This change adds a roles folder to the THT that
can be used with the new roles commands in python-tripleoclient by the
end user to generate a roles_data.yaml from a standard set of roles.
Depends-On: I326bae5bdee088e03aa89128d253612ef89e5c0c
Change-Id: Iad3e9b215c6f21ba761c8360bb7ed531e34520e6
Related-Blueprint: example-custom-role-environments
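With the roles folder in place, an end user can generate a
roles_data.yaml from the standard definitions, e.g.:

    openstack overcloud roles generate -o roles_data.yaml Controller Compute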
|