When we merged If3989f24f077738845d2edbee405bd9198e7b7db we correctly
used name_lower for most things, but we left out the
OS::TripleO::Network resource, which would cause errors like the
following:
Could not fetch contents for file:///tmp/tripleoclient-LdqQGJ/tripleo-heat-templates/network/internalapi.yaml
The reason is that the network filename is called internal_api.yaml.
Change-Id: I40f268668ed948e5d41ed0ff5a8fc954cef7b17c
Closes-Bug: #1697883
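For illustration, a resource registry entry of the kind involved (the exact key and relative path here are assumptions, not quoted from the patch) has to point at the file named after the network's name_lower:

  resource_registry:
    # correct: the filename follows name_lower (internal_api)
    OS::TripleO::Network::InternalApi: ../network/internal_api.yaml
    # broken: rendering from the plain name produced a non-existent path
    # OS::TripleO::Network::InternalApi: ../network/internalapi.yaml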
|
|
Currently there are some hard-coded references to roles here; rendering
from roles_data.yaml is a step towards making the use of isolated
networks with custom roles easier.
Partial-Bug: #1633090
Depends-On: Ib681729cc2728ca4b0486c14166b6b702edfcaab
Change-Id: If3989f24f077738845d2edbee405bd9198e7b7db
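As a rough sketch of the direction (the loop and key names below are illustrative assumptions, not the actual template), rendering per-role entries from the role list replaces hard-coded role names with a jinja2 loop:

  {%- for role in roles %}
    OS::TripleO::{{role.name}}::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  {%- endfor %}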
|
|
In change I90253412a5e2cd8e56e74cce3548064c06d022b1 we merged
containerized HAProxy setup, but because of a typo in resource
registry, CI kept using the non-containerized variant and it went
unnoticed that the containerized HAProxy doesn't work yet.
We merged a resource registry fix in
Ibcbacff16c3561b75e29b48270d60b60c1eb1083 and it brought down the CI,
which now used the non-working HAProxy.
After adding the missing haproxy container image to tripleo-common
in I41c1064bbf5f26c8819de6d241dd0903add1bbaa we got further, but the
CI still fails on an HAProxy-related problem, so we should revert to
using the non-containerized HAProxy for the time being.
Change-Id: If73bf28288de10812f430619115814494618860f
Closes-Bug: #1697645
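A sketch of the kind of registry mapping the revert keeps in place (paths assumed from the usual tripleo-heat-templates layout):

  resource_registry:
    # stay on the host-based HAProxy until the containerized one works
    OS::TripleO::Services::HAproxy: ../puppet/services/haproxy.yaml
    # containerized variant, to be restored once it is functional:
    # OS::TripleO::Services::HAproxy: ../docker/services/haproxy.yaml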
|
|
As noted in the review of the original patch,
I5e743f789ab7dd731bc7ad26226a92a4e71f95a1, the IronicInspectorAdmin
endpoint should use https.
Change-Id: I6e37427da679775f02ff0c5fe55cfee51c122e3d
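A sketch of the sort of EndpointMap entry being corrected (the port and host values here are illustrative assumptions; only the protocol change is taken from the commit):

  parameter_defaults:
    EndpointMap:
      IronicInspectorAdmin: {protocol: 'https', port: '5050', host: 'IP_ADDRESS'}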
|
|
Depends-On: I9abe867dfbdc81d14a1b3b3f1529240b5e522be5
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Ian Main <imain@redhat.com>
Co-Authored-By: Luigi Toscano <ltoscano@redhat.com>
Co-Authored-By: Telles Nobrega <tenobreg@redhat.com>
Change-Id: Id8e3b7e86fa05e0e71cc33414ceae78bab4e29b2
Closes-bug: #1668927
|
|
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Depends-On: I486de8b6ab2f4235bb4a21c3650f6b9e52a83b80
Change-Id: I6cf70fa05ad1c8aa6d9f837ddcd370eb26e45f97
|
|
This configures iscsid so that it runs as a container on the
relevant roles (undercloud, controller, compute, and volume).
When the iscsid docker service is provisioned it will also run
an ansible snippet that disables iscsid.socket on the host
OS, thus preventing the host's systemd from auto-starting iscsid
as it normally would.
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Change-Id: I2ea741ad978f166e199d47ed1b52369e9b031f1f
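A minimal sketch of the ansible snippet described, assuming it runs from the service's host prep tasks (the task layout is an assumption, not the literal change):

  host_prep_tasks:
    - name: stop and disable iscsid.socket on the host
      service:
        name: iscsid.socket
        state: stopped
        enabled: no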
|
|
HorizonSecureCookies is incompatible with non-ssl deployments, which
is our default deployment method. When SSL is in use, it can be
turned on in the enable-tls.yaml file. This does mean that
existing users won't automatically get this feature turned on as
part of their upgrade, because enable-tls.yaml is an environment
file that is intended to be copied and edited, but it is simple for
users who want that behavior to add the parameter to their copy
once they upgrade to a version where it is available.
Change-Id: If83d3d8709fc4e0c09569e8bf524721d332bf560
Closes-Bug: 1696861
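For users who copy enable-tls.yaml, re-enabling the behaviour is a one-line addition to their copy, roughly:

  parameter_defaults:
    # in your copied enable-tls.yaml
    HorizonSecureCookies: true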
|
|
Change-Id: I05126a108f5ab790e729d1f98399dca5801ebd69
|
|
It is 'HAproxy' and not 'HAProxy'. This needs fixing so that the
proper service is instantiated when a role includes the HAproxy
service.
Change-Id: Ibcbacff16c3561b75e29b48270d60b60c1eb1083
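In other words, the registry key has to match the service name the roles reference exactly; illustrative only (the path is an assumption):

  resource_registry:
    # wrong key, silently ignored by roles that list OS::TripleO::Services::HAproxy:
    #   OS::TripleO::Services::HAProxy: ../docker/services/haproxy.yaml
    # correct key:
    OS::TripleO::Services::HAproxy: ../docker/services/haproxy.yaml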
|
|
Implements: blueprint container-healthchecks
Depends-On: I9ccf1c4c948e6e347eb8e4d947edf77822a601cb
Change-Id: Iff7758623974a69e2c043cf611f46ce11c36cc59
|
|
Closes-bug: #1668935
Change-Id: I83a02735eb445e831bc74ec786f2bb42cd2f87d6
|
|
Closes-bug: #1668929
Change-Id: I051edcf2980bb9c2521e21c410055690c012a0d1
|
|
Co-Authored-By: Martin André <m.andre@redhat.com>
Partial-Bug: #1668922
Change-Id: I0c98f26b19caf755bbc80bd6a75fc17b5d191ae4
|
|
This change implements an initial container for haproxy in the non-HA
case (i.e. when the container is not spawned by pacemaker).
We tested this using a stock kolla haproxy container image and were
able to get haproxy running correctly in a container with net=host.
Change-Id: I90253412a5e2cd8e56e74cce3548064c06d022b1
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-on: I51c482b70731f15fee4025bbce14e46a49a49938
Closes-Bug: #1668936
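Roughly, the non-HA container definition boils down to something like the following (the image parameter name and volume list are assumptions following the usual docker service template layout, not the patch contents):

  docker_config:
    step_1:
      haproxy:
        image: {get_param: DockerHAProxyImage}
        net: host
        restart: always
        volumes:
          - /var/lib/kolla/config_files/haproxy.json:/var/lib/kolla/config_files/config.json:ro
          - /var/lib/config-data/haproxy/:/var/lib/kolla/config_files/src:ro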
|
|
Ie9bdd9b16bcb9f11107ece614b010e87d3ae98a9 improperly used
CephPoolDefaultSite for the puppet-ceph-devel environment. The correct
configuration item is CephPoolDefaultSize.
Change-Id: If3a23f8d000061da62e4a7565a7fb6cf1ac97a4a
|
|
environment file."
|
|
This adds the sshd puppet service to the containerized compute role.
All other roles already include this service in the default roles data;
it is only missing from the compute role.
As the sshd service runs on the docker host, this must remain a
traditional puppet service. NB the sshd puppet service does not enable
sshd; it just enables management of the sshd config via t-h-t/puppet.
Closes-bug: #1693837
Change-Id: I86ff749245ac791e870528ad4b410f3c1fd812e0
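In roles_data terms this is just adding the service to the Compute role's list; a sketch (the surrounding entries are illustrative):

  - name: Compute
    ServicesDefault:
      - OS::TripleO::Services::Sshd
      - OS::TripleO::Services::NovaCompute
      # ...rest of the Compute services...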
|
|
This will be used in our HA OVB CI job where we currently are
failing due to running out of memory. Telemetry will still be
tested via scenarios, but this will free up a large chunk of
memory in the most memory intensive job.
Closes-Bug: 1693174
Change-Id: Idefe9f0de47c5b0f29b7326642d697ed179e2eb8
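Disabling telemetry in an environment file amounts to mapping its services to OS::Heat::None; a sketch (the exact set of services disabled is an assumption):

  resource_registry:
    OS::TripleO::Services::CeilometerApi: OS::Heat::None
    OS::TripleO::Services::CeilometerCollector: OS::Heat::None
    OS::TripleO::Services::GnocchiApi: OS::Heat::None
    OS::TripleO::Services::AodhApi: OS::Heat::None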
|
|
Add missing optional services for docker if they are present in
the non-docker optional services, and vice versa.
Fix an issue where the non-containerized Mongo resources are
missing when deploying the optional containerized zaqar service.
Add the non-containerized Ironic-Pxe resources to the optional
Ironic services, as is done for the containerized Ironic.
Change-Id: I56675e015fa4bbd6d9809dbf7c21453939321410
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
|
Currently TripleO does not support the LinuxBridge driver; setting
NeutronMechanismDrivers to linuxbridge will not force the ml2 plugin
to use linuxbridge.
This commit adds a new environment file which replaces the default ovs
agent with linuxbridge on Compute and Controller nodes.
Change-Id: I433b60a551c1eeb9d956df4d0ffb6eeffe980071
Closes-Bug: #1652211
Depends-On: Iae87dc7811bc28fe86db0c422c363eaed5e5285b
Depends-On: Ie3ac03052f341c26735b423701e1decf7233d935
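The new environment roughly swaps the agent service and the mechanism driver (service key names assumed from the usual registry):

  resource_registry:
    OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
    OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
    OS::TripleO::Services::NeutronLinuxbridgeAgent: ../puppet/services/neutron-linuxbridge-agent.yaml

  parameter_defaults:
    NeutronMechanismDrivers: linuxbridge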
|
|
right thing by default"
|
|
Service_provider is configured to point to networking-odl
Change-Id: Icdb1c1414b237a9409e8e7dc55bb3c01da41841c
Signed-off-by: Ricardo Noriega <rnoriega@redhat.com>
|
|
by default
The default value is 0, which causes the minimum number to be calculated based on the replica count
from osd_pool_default_size. The default replica count is 3 and the calculated min_size is 2.
If the replica count is 1 then the min_size is 1, i.e. min_size = replica - (replica/2).
Add a CephPoolDefaultSize parameter to ceph-mon.yaml. This parameter defaults to 3 but can
be overridden. See puppet-ceph-devel.yaml for an example.
Change-Id: Ie9bdd9b16bcb9f11107ece614b010e87d3ae98a9
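As an example of overriding it for a single-replica dev setup, in the spirit of puppet-ceph-devel.yaml (the value 1 here is illustrative):

  parameter_defaults:
    # size 1 gives min_size = 1 - (1/2) = 1 with integer division
    CephPoolDefaultSize: 1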
|
|
Agent service is disabled and service_provider is configured
to point to networking-odl
Change-Id: I570d15a092cff66666a74e95dee69f6531a58b22
Signed-off-by: Ricardo Noriega <rnoriega@redhat.com>
|
|
It's not used by any service that we enable by default. So instead, I
added it to the environment that enables the services that use it.
Change-Id: Id2e6550fb7c319fc52469644ea022cf35757e0ce
|
|
From ocata on, the vhost socket directory requires a different set of
permissions from the default directory (/var/run/openvswitch). Modify
the directory to a new agreed-upon directory, which will be created in puppet.
Closes-Bug: #1687993
Depends-On: I255f98c40869e7508ed01a03a96294284ecdc6a8
Change-Id: I77250ca84c9da2fb5a8381e6f60234f8a05cbf12
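If the new location is exposed as a parameter, overriding it would look roughly like this (both the parameter name and the path below are assumptions, not taken from the patch):

  parameter_defaults:
    NeutronVhostuserSocketDir: /var/lib/vhost_sockets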
|
|
file.
During a deployment on lower-spec systems, the "db sync" can take longer than five minutes.
The solution is to increase the default value of DatabaseSyncTimeout from 300 to 900
by using the environment file "low-memory-usage.yaml".
Change-Id: I6463dbdd4dfe1d6f2dd283211cc496fe3a628fb0
Closes-bug: #1689318
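The change amounts to one parameter in the low-memory-usage environment, roughly:

  parameter_defaults:
    # allow slow db syncs on low-spec hardware
    DatabaseSyncTimeout: 900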
|
|