This commit brings the change from
I3896fa2ea7caa603186f0af04f6d8382d50dd97a to
docker-services-tls-everywhere.yaml; the original commit message was:
These duplicate the defaults in puppet/services/docker.yaml and
break things if you include an environment file (e.g. the
containers-default-parameters.yaml generated by quickstart) before
docker.yaml.
Instead, it's probably more helpful to include the commented lines
showing how to enable use of a local docker registry.
Change-Id: Ifa95ef60bc17bd2638ebb6aebf77a819b28c9f0b
Related-Bug: #1691524
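For illustration, the commented lines referred to above would look
roughly like this (a sketch only; the parameter names and the
registry address are assumptions about a typical undercloud setup,
so check the actual environment file):

  parameter_defaults:
    # Uncomment to pull images from a local docker registry on the
    # undercloud (address is illustrative):
    # DockerNamespace: 192.168.24.1:8787/tripleoupstream
    # DockerNamespaceIsRegistry: true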
It was removed by mistake from the docker.yaml environment file in
I76f188438bfc6449b152c2861d99738e6eb3c61b.
Change-Id: If8df98e1ddd0961ab0c9e5df917fef8200db65e6
Closes-Bug: #1698749
The previous fix Ib10e4f18d967d356a15b97f58c488f8402a73356 made
multinode CI pass, but there was still an error during volume
scheduling on OVB:
OSError: [Errno 13] Permission denied: '/var/lib/cinder/conversion'
This was most likely because cinder-volume was running on the host
and used the host's cinder user, while we still deployed a
containerized cinder-backup, which chowned /var/lib/cinder to
kolla's cinder user, whose UID doesn't match the baremetal one.
We didn't hit this issue in the multinode job because it doesn't
presently deploy the cinder-backup service at all.
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I9ac74d6717533f59945694b4a43fe56d7ca768c6
Closes-Bug: #1698136
CI was stuck on collecting logs. The collect-logs playbook, which
normally takes just a few minutes, took more than an hour and was
eventually killed.
The playbook was stuck on collecting LVM info on the overcloud node,
which runs this command:
(vgs; pvs; lvs) &> /var/log/extra/lvm.txt
Therefore it's very likely that the problematic part is the LVM
setup in the containerized cinder-volume service, and falling back
to the non-containerized service for the time being should get the
CI going again.
Change-Id: Ib10e4f18d967d356a15b97f58c488f8402a73356
Closes-Bug: #1698136
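A fallback like this is typically just a resource_registry
override; a minimal sketch (the exact resource name and template
path are assumptions, not the literal change):

  resource_registry:
    # Map cinder-volume back to the non-containerized (puppet)
    # implementation; the path is illustrative:
    OS::TripleO::Services::CinderVolume: ../puppet/services/cinder-volume.yaml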
Depends-On: I5dc10ef5cccf6d378c20c68fc4a32d2d3c38233f
Change-Id: Ib96040c2e27ad76b1fa6ecb9468bb9d97b3c4518
The current port conflicts with trove. This is updated in the
puppet module; see the related change:
https://review.openstack.org/#/c/471551/
Change-Id: Iefacb98320eef0bca782055e3da5d243993828d7
The file doesn't exist. The PXE setup is part of
puppet/services/ironic-conductor.yaml.
Change-Id: I3a6f038ed69ea44f0594064b6f9657ff1b72e1bb
Closes-Bug: #1697927
When we merged If3989f24f077738845d2edbee405bd9198e7b7db we
correctly used name_lower for most things, but we left out the
OS::TripleO::Network resource, which would cause errors like the
following:
Could not fetch contents for file:///tmp/tripleoclient-LdqQGJ/tripleo-heat-templates/network/internalapi.yaml
The reason is that the network file is actually named internal_api.yaml.
Change-Id: I40f268668ed948e5d41ed0ff5a8fc954cef7b17c
Closes-Bug: #1697883
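In other words, the registry entry has to use the name_lower form
in the template path; a minimal sketch (illustrative, not the
literal diff):

  resource_registry:
    # The template filename uses name_lower (internal_api), not the
    # network name run together (internalapi):
    OS::TripleO::Network::InternalApi: network/internal_api.yaml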
Depends-On: I3e865f2e9b6935eb3dfa4b4579c803f0127848ae
Change-Id: I09327a63d238a130b6ac0f2361f80e2b244b4b52
This service generates the /etc/my.cnf.d/tripleo.cnf file, which is
used to configure MySQL clients (e.g. client bind address, client
SSL configuration...).
We generate the config file in this service and let containerized
MySQL clients bind-mount
/var/lib/config-data/mysql_client/etc/my.cnf.d/tripleo.cnf into
their own container. This way, when this MySQLClient service is
updated, the other containers will automatically pick up the
updated configuration at their next restart.
Partial-Bug: #1692317
Change-Id: Idc56d27fb9645ad3b07df8ef08b7e2ce29e6d499
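The bind-mount described above is just an extra entry in a
consuming container's volumes list; a sketch under that assumption
(the service name and step number are hypothetical):

  docker_config:
    step_2:
      some_mysql_client_service:
        volumes:
          # Mount the generated client config read-only, using the
          # paths from the commit message:
          - /var/lib/config-data/mysql_client/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro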
Depends-On: I037858a445742de58bd2f8d879f2b1272b07f481
Change-Id: Ifd138ea553a45a637a1a9fe3d0e946f8be51e119
Depends-On: I037858a445742de58bd2f8d879f2b1272b07f481
Change-Id: I808a5513decab1bd2cce949d05fd1acb17612a42
Currently there are some hard-coded references to roles here;
rendering from roles_data.yaml is a step towards making the use of
isolated networks with custom roles easier.
Partial-Bug: #1633090
Depends-On: Ib681729cc2728ca4b0486c14166b6b702edfcaab
Change-Id: If3989f24f077738845d2edbee405bd9198e7b7db
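Rendering from roles_data.yaml generally means replacing
hard-coded role names with a j2 loop; a structural sketch (the
resource names are illustrative, not the actual template):

  {%- for role in roles %}
  # One entry per role from roles_data.yaml, instead of a
  # hard-coded list of role names:
  OS::TripleO::{{role.name}}::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  {%- endfor %}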
In change I90253412a5e2cd8e56e74cce3548064c06d022b1 we merged the
containerized HAProxy setup, but because of a typo in the resource
registry, CI kept using the non-containerized variant, and it went
unnoticed that the containerized HAProxy doesn't work yet.
We merged a resource registry fix in
Ibcbacff16c3561b75e29b48270d60b60c1eb1083 and it brought down the
CI, which now used the non-working HAProxy.
After adding the missing haproxy container image to tripleo-common
in I41c1064bbf5f26c8819de6d241dd0903add1bbaa we got further, but
the CI still fails on an HAProxy-related problem, so we should
revert to using the non-containerized HAProxy for the time being.
Change-Id: If73bf28288de10812f430619115814494618860f
Closes-Bug: #1697645
The existing host_config_and_reboot.role.j2.yaml was added in Ocata
to configure kernel args. This can be enhanced with the use of
role-specific parameters, which is what the current patch does. The
earlier method is deprecated and will be removed in the Q release.
Implements: blueprint ovs-2-6-dpdk
Change-Id: Ib864f065527167a49a0f60812d7ad4ad12c836d1
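Role-specific parameters of this kind are supplied per role via a
<RoleName>Parameters map; a sketch (the role name and kernel args
are illustrative assumptions):

  parameter_defaults:
    # Applies only to nodes of the ComputeOvsDpdk role:
    ComputeOvsDpdkParameters:
      KernelArgs: "intel_iommu=on iommu=pt hugepagesz=1G hugepages=32"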
As noted in the original patch review
I5e743f789ab7dd731bc7ad26226a92a4e71f95a1, the IronicInspectorAdmin
endpoint should use https.
Change-Id: I6e37427da679775f02ff0c5fe55cfee51c122e3d
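In the endpoint map that amounts to switching the entry's
protocol; a sketch (the port and host placeholder follow the usual
EndpointMap shape, but are assumptions here):

  parameter_defaults:
    EndpointMap:
      IronicInspectorAdmin: {protocol: https, port: '5050', host: IP_ADDRESS}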
Fix a bug that prevented these from working. A unit test and
documentation for the nested environment functionality are also
included.
Change-Id: I2d4aeb584eb624178d601cfd6bc0a6473cb5289f