Change-Id: Ibeb28d7c497b02253d00a74257989cefba2b0cc4
(cherry picked from commit fc44ee6ff3553754c618349df3be7544b17e9c5f)
|
|
For DPDK, vhost-user sockets are created on the host in the
/var/lib/vhost_sockets directory, which is used by libvirt and
openvswitch. This directory has the necessary permissions and
SELinux policies. Mount this directory into the libvirt container
(see the sketch below).
Change-Id: Id8be208d1b05886ac45dfdcf48fe766ee5724d1c
Partial-Bug: #1712732
(cherry picked from commit 3ea04744c22ae4cd2e1f2b77fc7d5ade012899e0)
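A minimal sketch of the kind of bind mount this change describes, assuming the common docker_config/volumes layout of a TripleO docker service template; the service entry, image parameter, and extra keys are illustrative, not the exact change.
"""
# Illustrative only: bind-mount the host vhost-user socket directory
# into the libvirt container (keys follow the usual docker_config shape).
docker_config:
  step_3:
    nova_libvirt:
      image: {get_param: DockerNovaLibvirtImage}
      net: host
      privileged: true
      restart: always
      volumes:
        - /var/lib/vhost_sockets:/var/lib/vhost_sockets
"""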
|
Merge "..." into stable/pike
|
|
Patch Ie09ce2a52128eef157e4d768c1c4776fc49f2324 added a new
set of upgrade tasks that were missing the 'tags' keyword
(see the sketch below).
Closes-Bug: 1715631
Change-Id: Ib1c1aadfbf58c9bccc18667934c8b3c5f38fafa4
(cherry picked from commit 7897d38274cb6435289bc4f4928f96b111e5b4f4)
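A hedged sketch of the fix: upgrade_tasks entries carry a 'tags' keyword naming the step they run in. The task shown is illustrative.
"""
upgrade_tasks:
  - name: Stop and disable the mongodb service   # illustrative task
    tags: step2                                  # the missing keyword
    service: name=mongod state=stopped enabled=no
"""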
|
|
This patch adds support for running the neutron SR-IOV agent in a
container.
Depends-On: I4a63845a97c890d7d408731ec5509c320289f18f
Depends-On: Ie5d8cd7863c0d042cc6a4e1fc52602d8a03a1935
Depends-On: I1b5ab0a64ae1f5735f1bd5a68e6ae8bdcf47ddec
Closes-Bug: #1715388
Change-Id: I7ee603b32eddacd02d846dff00dd1b786d4a7ad9
(cherry picked from commit 94c9c2f954e85de0ab895926a969587b90bc4191)
|
|
Previously it was only possible to configure the overcloud with
an external Ceph cluster via puppet-ceph-external.
This submission adds a CephExternal implementation which uses
ceph-ansible.
Change-Id: Id0d375f88e27e91e9d89f25a0cd7388b6e45df8b
Depends-On: Ifc57c9cf6ca8017a2abc78d6320c0675ad49ca9f
Closes-Bug: #1714271
(cherry picked from commit 01e55c314de74579196518d958bf5be30e390409)
|
|
This patch allows using ceph-ansible to configure the RGW service
in the overcloud. It still uses puppet-keystone to create the
necessary user and endpoint in the Keystone catalog.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: Iafa17bb64c54e40350b2ba7d76dea3d82fcab0e4
(cherry picked from commit 5b3cd1dcacff408bcb482bdea6cded8755a39ebb)
|
It is mounted into the actual haproxy container, but not into the
init one.
Change-Id: I66b69e0bb3642dbfeec767ef5216d515786b5b19
Closes-Bug: #1715132
(cherry picked from commit 03622e89ac3037b4d69d913586823e689b210688)
|
|
The OpenDaylight journal and snapshots directories hold data needed
for updates. This patch mounts these directories and adds the ODL log
file under /var/log/containers/opendaylight (see the sketch below).
Change-Id: I65c6183c2867b2ced6e6ef25896d80154857b7dc
Closes-Bug: #1714231
(cherry picked from commit 81dd0808d2a180d108f1159bc67f345fe6bf27d4)
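A rough sketch of the mounts described above, assuming the docker_config layout of the OpenDaylight service template; the container-side paths and image parameter are assumptions.
"""
# Illustrative only: persist ODL journal/snapshot data and expose the log
# directory on the host (container-side paths are assumptions).
docker_config:
  step_1:
    opendaylight_api:
      image: {get_param: DockerOpendaylightApiImage}
      net: host
      restart: always
      volumes:
        - /var/lib/opendaylight/journal:/opt/opendaylight/journal
        - /var/lib/opendaylight/snapshots:/opt/opendaylight/snapshots
        - /var/log/containers/opendaylight:/opt/opendaylight/data/log
"""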
|
We do not want default values for the container image name parameters;
deployers are expected to set these values instead (see the sketch
below).
Change-Id: I9377b7c3564360353aa6da2d2457b2cfacd4e9d6
Closes-Bug: #1714221
(cherry picked from commit fcc3259891ee67956d63c37217acdb999bc4bb65)
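An illustrative parameter of this kind, with no default so the deployer must supply the image name; the parameter name is one example, not the full set touched by this change.
"""
parameters:
  DockerNovaApiImage:        # example image parameter, default removed
    description: image
    type: string
"""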
|
|
Redis does not support TLS out of the box. Let's use a proxy container
for TLS termination (sketched below).
bp tls-via-certmonger
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: Ie2ae0d048a71e1b1b4edb10c74bc0395a1a9d5c9
Depends-On: I078567c831ade540cf704f81564e2b7654c85c0b
Depends-On: Ia50933da9e59268b17f56db34d01dcc6b6c38147
(cherry picked from commit c2a93cf4c5d9d6b5ee0536380751a7a9540927cc)
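A purely illustrative sketch of the sidecar approach described here, a proxy container terminating TLS in front of Redis; the container name, mounted paths, and stunnel-style command are assumptions, not the actual template.
"""
# Illustrative only: a TLS-terminating proxy container running beside redis.
docker_config:
  step_1:
    redis_tls_proxy:                 # hypothetical sidecar name
      image: {get_param: DockerRedisImage}
      net: host
      restart: always
      volumes:
        - /etc/pki/tls:/etc/pki/tls:ro        # certificates from certmonger
      # accept TLS on the public endpoint and forward plaintext to the
      # local redis port (shown conceptually)
      command: ['/usr/bin/stunnel', '/etc/stunnel/stunnel.conf']
"""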
|
|
This change removes the entry that containerised mongodb by default,
since it should now be disabled after change
Id2e6550fb7c319fc52469644ea022cf35757e0ce.
Removing the entry means the default mapping to mongodb-disabled.yaml
takes effect.
This change also modifies the upgrade_tasks so that the mongod service
is only disabled when the service exists (see the sketch below). There
appear to be upgrade scenarios which fail because mongodb was never
installed in the first place.
Change-Id: Ie09ce2a52128eef157e4d768c1c4776fc49f2324
Closes-Bug: #1715031
(cherry picked from commit cb81cbe3b5f3887f5d690c590e52b728f74d43c3)
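A hedged sketch of the 'disable only if present' pattern this describes; the exact check used by the real change may differ.
"""
upgrade_tasks:
  - name: Check if the mongod service is installed
    tags: step2
    command: systemctl is-enabled mongod
    ignore_errors: true
    register: mongod_enabled
  - name: Stop and disable mongod only when it exists
    tags: step2
    service: name=mongod state=stopped enabled=no
    when: mongod_enabled.rc == 0
"""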
|
|
Capabilities were not properly escaped and were ignored by ceph.
Change-Id: I099c3d9bad95ec69ac85fe406e3e1d4685ede439
Closes-Bug: #1713928
|
|
Currently, for non-controller upgrades, we loop through the upgrade
steps and run the upgrade tasks based on 'when' conditionals that
combine the step number with the existing upgrade task condition.
Some tasks fail because the variables used in the 'when' conditionals
are not available in all steps. This change adds default values to
these variables where possible, or creates them for all steps, to
avoid failures (see the sketch below).
Related-Bug: 1708115
Change-Id: I5c731043cec8e31fc82ca98972a301baa7294c4f
(cherry picked from commit e2f00ef1dc98140087c81e202a520f549f9a0970)
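An illustration of the kind of guard this adds: a variable registered by a check task is referenced in a later step's 'when', so it gets a default. The service and task names are illustrative.
"""
upgrade_tasks:
  - name: Check if iscsid is enabled            # illustrative check task
    tags: common
    command: systemctl is-enabled iscsid
    ignore_errors: true
    register: iscsid_enabled
  - name: Stop iscsid service
    tags: step2
    service: name=iscsid state=stopped
    # default() avoids an undefined-variable failure in steps where the
    # check did not run
    when: (iscsid_enabled.rc | default(1)) == 0
"""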
|
|
Use a more restrictive mode for these files, as some may contain
sensitive data which shouldn't be world-readable.
Closes-Bug: #1714986
Change-Id: Ib1e79b1d4e25d6e329938402b1ca776bdab81bdd
(cherry picked from commit 94c7752cfae64d96124a32bc36ccd6ec7b4df4a7)
|
Get the path from the CONFIG_VOLUME_PREFIX environment variable.
This is useful for debugging and for generating configuration files
in a different directory.
Change-Id: Ib85e3898804312ebb6677a5fa189fbfc357ce27c
(cherry picked from commit 0c62b6cd8d696befb1c0c31bb6e206199ce1edac)
|
|
Correct the zaqar service name to match the bootstrap host id name
Closes-bug: #1714253
Change-Id: Iced8f3a7e64d9023bd46a50629a56e087d1f6f24
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
(cherry picked from commit d782f687cb7794e0491c0d0f6dc3d9b28196dc96)
|
|
Use a separate config_volume for the swift_ringbuilder puppet_config
tasks. This is necessary so that the swift_ringbuilder and
swift-storage services don't both rsync files to the same bind-mounted
directory. The rsync command from docker-puppet.py uses --delete-after,
so when they both use the same config_volume, each can end up deleting
the files generated by the other (depending on the order of execution).
Even though a separate config_volume is used, the rings must still end
up in /etc/swift for the swift service containers. An additional
container init task is used to copy the ring files into
/var/lib/config-data/puppet-generated/swift/etc/swift so that they will
be present when the actual swift service containers are started (see
the sketch below).
Change-Id: I05821e76191f64212704ca8e3b7428cda6b3a4b7
Closes-Bug: #1710952
(cherry picked from commit cba00abb7517efa6a8d9b8fb954563204323ffed)
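A rough sketch of the copy step described above; the container name, image parameter, source path, and command are illustrative rather than the exact template.
"""
# Illustrative only: copy the generated rings from the swift_ringbuilder
# config volume into the swift config volume before the swift service
# containers start.
docker_config:
  step_3:
    swift_copy_rings:
      image: {get_param: DockerSwiftProxyImage}
      user: root
      volumes:
        - /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift:/swift_ringbuilder:ro
        - /var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw
      command: ['/bin/bash', '-c', 'cp -a /swift_ringbuilder/*.ring.gz /etc/swift/']
"""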
|
|
The docker _cron services show up as (unhealthy) because they share
the container images of the OpenStack services. As such, we need to
manually override the health checks for these services. Setting them
to /bin/true makes the services show up as healthy (see the sketch
below).
Change-Id: I46e12bcec226fbe2768c7fe8f0e7719df46401a9
Closes-bug: #1713183
(cherry picked from commit d1aaf0aadf487ccfcdecb47f3cfbf6087401242b)
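A sketch of the override, assuming the healthcheck key accepted by docker_config entries; the service shown is one example of the affected _cron containers.
"""
docker_config:
  step_4:
    nova_api_cron:                       # example _cron service
      image: {get_param: DockerNovaApiImage}
      net: host
      restart: always
      healthcheck:
        test: /bin/true                  # override the image health check
"""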
|
|
Merge "..." into stable/pike
|
|
For bug 1708115 and the O..P upgrade of 'non-controllers', we are now
generating ansible playbooks from the collected service upgrade_tasks,
and these are executed instead of the legacy tripleo_upgrade_node.sh.
To clarify, 'non-controllers' means any node for which the
corresponding roles_data.yaml role has the disable_upgrade_deployment
flag set to True.
As a first pass, I am removing the workarounds from the script but
keeping its delivery mechanism for now, in case it is still needed.
We can either remove it here later or keep it until next cycle.
The most important part for now is that we no longer 'manually'
run puppet here. Instead, the post_deploy_steps are also collected
into a playbook and executed after the upgrade_tasks
(see the bug for discussion of the mechanism and related reviews).
Change-Id: Ib017b0ab435ca9558cf8659d434489cdf01df955
Related-Bug: 1708115
(cherry picked from commit 4c5b9c5c967105536106fa4a7e1ec2352b14b08c)
|
|
Depending on the version of mariadb/galera installed, the
mysql_bootstrap command might fail with the following unrevealing
error:
openstack-mariadb-docker:2017-08-28.10 "bash -ec 'if [ -e /v" 3 hours ago Exited (124) 3 hours ago
The timeout is actually due to the fact that the following snippet does
not complete within 60 seconds:
"""
if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
kolla_start
mysqld_safe --skip-networking --wsrep-on=OFF &
timeout ${DB_MAX_TIMEOUT} /bin/bash -c ''until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done''
mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER ''clustercheck''@''localhost'' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}'';"
mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO ''clustercheck'
"""
The problem is that with older mariadb versions:
galera-25.3.16-3.el7ost.x86_64
mariadb-5.5.56-2.el7.x86_64
The mysqld_safe process starts in galera mode (as opposed to single
local mode):
170830 17:03:05 [Note] WSREP: Start replication
170830 17:03:05 [Note] WSREP: GMCast version 0
...
170830 17:03:05 [ERROR] WSREP: wsrep::connect() failed: 7
170830 17:03:05 [ERROR] Aborting
That means that even though we specified --wsrep-on=OFF it is still
starting in cluster mode. Let's add the extra --wsrep-provider=none
option, which older versions require.
Let's also add a '-x' to this transient container, as that would have
helped us understand right away that it was mysqld_safe that was not
starting. I tested this successfully on an environment that showed the
problem. The new option is still accepted by newer DB versions in any
case.
Closes-Bug: #1714057
Change-Id: Icf67fd2fbf520e8a62405b4d49e8d5169ff3925b
Co-Authored-By: Mike Bayer <mbayer@redhat.com>
(cherry picked from commit c19968ca852ab608513fe692aab958af25276220)
|
|
ovn-dbs pacemaker bundle resources are created to support
Master/Slave HA. puppet-tripleo already supports creating
ovn-dbs bundle resources; the heat template added in this patch
makes use of that support.
Closes-bug: #1699085
Change-Id: I23c2d312cfb144f9afc14f0982a92670dc29d74c
(cherry picked from commit 444a39f5983e71e3222b6b7f8f523fce60aeece7)
|
ceph-ansible" into stable/pike
|
Currently the neutron-ovs-agent container is stuck in a restart loop
in many environments because the bridge br-ex is missing.
This bridge is created by running the puppet class
neutron::agents::ml2::ovs while limiting that run to the tag
neutron::plugins::ovs::bridge (see the sketch below).
The hiera value neutron::agents::ml2::ovs::bridge_mappings should
already exist to create the bridge with the required settings.
This change should ensure br-ex exists after step 3.
Since br-ex is created regardless of the chosen network config,
environments/docker-network.yaml is no longer required. It can be
deleted once there are no more references to it in CI and
documentation.
Change-Id: Ie425148b0ad0f38e149c5fa0a97d98ec35d0a5bb
Closes-Bug: #1699261
Closes-Bug: #1691403
Closes-Bug: #1689556
(cherry picked from commit 76f130d6e8f7434433b2602af9794f1e9c742e1f)
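One plausible way to express the limited puppet run in a docker service template, using the puppet_config convention; the class and tag come from the description above, but the exact wiring in the real change may differ.
"""
# Illustrative only: create br-ex by applying the ovs agent class,
# restricted to the bridge-related resources via a puppet tag.
puppet_config:
  config_volume: neutron
  puppet_tags: neutron::plugins::ovs::bridge
  step_config: include ::neutron::agents::ml2::ovs
  config_image: {get_param: DockerNeutronConfigImage}
"""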
|
|
The pacemaker puppet module takes care of mounting /etc/ceph into the
manila-share container (I23b6890b4cf7f1e6fe84b6be280dde82218275fc).
Change-Id: I1026b2436275b17cfe3ac85192d42c5268f0a630
Related-To: I23b6890b4cf7f1e6fe84b6be280dde82218275fc
(cherry picked from commit 0d8040ca33d42dbb7e06162f2b659ff6cbc0316f)
|
|
On upgrade we need to run a specific playbook for ceph-ansible to be
able to take over the pre-existing Ceph cluster deployed with
puppet-ceph and then migrate it into a containerized deployment.
This changes the playbook we use on upgrade so that it migrates the
cluster into containers in addition to taking over the cluster (see
the sketch below).
Change-Id: I353c219832c41328f298fa7b65768ecf26c37f29
(cherry picked from commit cab266c9b2b62c0033f8fb66e8e61b7aa46b3e2b)
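A hedged illustration of pointing the upgrade at a different ceph-ansible playbook; the parameter name and playbook path are assumptions based on ceph-ansible's infrastructure playbooks, not necessarily the exact values used by this change.
"""
# Illustrative only: playbook selection for the take-over/migration upgrade.
parameter_defaults:
  CephAnsiblePlaybook: /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
"""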
|
Change-Id: Idf627a348cad8d5287c82cb393367210f1c760cf
Closes-bug: #1713185
(cherry picked from commit 20e1f0e8c9a2bbc3734f6eec0ee9ac2d5156f166)
|
|
This patch adds support for containerizing the OVN services for the
base profile.
The OVN db servers do not support active-active mode yet. They do
support master-slave mode through pacemaker, which will be handled in
a later patch.
Presently the tripleo container framework doesn't allow starting a
container only on controller 0 (the bootstrap node). The OVN db
servers and ovn-northd are started on all the controllers, but only
the OVN db servers running on the bootstrap controller are configured
to listen on the TCP ports 6641 and 6642. The OVN neutron mechanism
driver and the ovn-controllers use the ovn_dbs_vip to connect to the
OVN db servers.
Haproxy configures all the controllers as backends, but only the OVN
db servers running on controller 0 respond, since only they are
configured properly.
The OVN containers running on the other controller nodes do not take
part in any way and are wasted resources.
This patch also adds the scenario007-multinode-containers CI template.
Partial-bug: #1699085
Change-Id: I98b85191cc1fd8c2b166924044d704e79a4c4c8a
(cherry picked from commit e7cd03d2f0fcd8e3069246ced94f1a83869b8bea)
|
|
This containerises Barbican API in TripleO
Change-Id: Icc5e9841ea48c806af4db61cd6de5e9a7a40a988
Partial-Bug: 1668924
Depends-On: I6b5ec18ccdd51b90ff27ff7d4341260dfba71e4e
(cherry picked from commit 6d338b809accea4d3ba09ca8363b1a97ed79b658)