path: root/docker
Age | Commit message | Author | Files | Lines
2017-09-06 | TLS proxy for redis | Martin André | 1 | -23/+64
Redis does not have TLS out of the box. Let's use a proxy container for TLS termination.

bp tls-via-certmonger

Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: Ie2ae0d048a71e1b1b4edb10c74bc0395a1a9d5c9
Depends-On: I078567c831ade540cf704f81564e2b7654c85c0b
Depends-On: Ia50933da9e59268b17f56db34d01dcc6b6c38147
(cherry picked from commit c2a93cf4c5d9d6b5ee0536380751a7a9540927cc)
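A minimal sketch of the approach, assuming a stunnel-style sidecar; the service name, image, and cert paths below are illustrative, not the literal template from this change:

    # Hypothetical docker_config entry: a sidecar terminates TLS on the
    # redis port and forwards plaintext traffic to redis on localhost.
    docker_config:
      step_1:
        redis_tls_proxy:
          image: 'tripleoupstream/centos-binary-tls-proxy:latest'  # assumed image name
          net: host
          restart: always
          volumes:
            - /etc/pki/tls/certs/redis.crt:/etc/pki/tls/certs/redis.crt:ro
            - /etc/pki/tls/private/redis.key:/etc/pki/tls/private/redis.key:ro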
2017-09-06 | Containerized mongodb, disable by default, fix upgrade | Steve Baker | 1 | -0/+4
This change removes the entry to containerise mongodb by default, because it should now be disabled since the change Id2e6550fb7c319fc52469644ea022cf35757e0ce. Removing the entry means the default mapping to mongodb-disabled.yaml takes effect.

This change also modifies the upgrade_tasks so that the mongod service is only disabled when the service exists. There appear to be upgrade scenarios which fail because mongodb was never installed in the first place.

Change-Id: Ie09ce2a52128eef157e4d768c1c4776fc49f2324
Closes-Bug: #1715031
(cherry picked from commit cb81cbe3b5f3887f5d690c590e52b728f74d43c3)
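A sketch of the two pieces described above; the registry path and task names are assumptions, not the literal diff:

    # Default mapping that takes effect once the docker entry is removed:
    resource_registry:
      OS::TripleO::Services::MongoDb: puppet/services/database/mongodb-disabled.yaml

    # Upgrade task shape: only disable mongod when the unit actually exists.
    upgrade_tasks:
      - name: Check if mongod service exists
        shell: systemctl list-unit-files --type=service | grep -q mongod
        register: mongod_exists
        failed_when: false
      - name: Stop and disable mongod
        service: name=mongod state=stopped enabled=no
        when: mongod_exists.rc == 0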
2017-09-06 | Escape ceph capabilities for manila client | Jan Provaznik | 1 | -1/+1
Capabilities were not properly escaped and were ignored by ceph.

Change-Id: I099c3d9bad95ec69ac85fe406e3e1d4685ede439
Closes: #1713928
2017-09-06 | Allow upgrade tasks to run when looping through steps | Marius Cornea | 1 | -2/+2
Currently, for non-controller upgrades, we loop through the upgrade steps and run the upgrade tasks based on 'when' conditionals that combine the step number with the existing upgrade task condition. Some of the tasks fail because the variables used in the 'when' conditionals are not available at all steps. This change adds default values to these vars where possible, or creates them for all steps, to avoid failures.

Related-Bug: 1708115
Change-Id: I5c731043cec8e31fc82ca98972a301baa7294c4f
(cherry picked from commit e2f00ef1dc98140087c81e202a520f549f9a0970)
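For illustration, the safe pattern looks roughly like this (the variable and service names are made up):

    upgrade_tasks:
      - name: Stop example service, but only when its flag was actually set
        service: name=example state=stopped
        when:
          - step|int == 2
          - example_service_enabled|default(false)|bool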
2017-09-05 | Set mode for ansible written files | Steven Hardy | 1 | -0/+1
Use a more restrictive mode for these files, as some may contain sensitive data which shouldn't be world readable.

Closes-Bug: #1714986
Change-Id: Ib1e79b1d4e25d6e329938402b1ca776bdab81bdd
(cherry picked from commit 94c7752cfae64d96124a32bc36ccd6ec7b4df4a7)
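In ansible terms the fix boils down to passing a restrictive mode when the files are written; a minimal illustration (the variable and destination path are placeholders, not the exact template code):

    - name: Write deployment data readable by owner only
      copy:
        content: "{{ deployment_data }}"     # placeholder variable
        dest: /var/lib/tripleo/some-output.yaml   # illustrative path
        mode: '0600'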
2017-09-04Merge "Stop hardcoding host's config volume path" into stable/pikeJenkins1-1/+1
2017-09-04Merge "Manually set healthchecks for _cron services" into stable/pikeJenkins4-0/+8
2017-09-04Merge "Fix containerized zaqar-api db_sync" into stable/pikeJenkins1-1/+2
2017-09-04 | Stop hardcoding host's config volume path | Martin André | 1 | -1/+1
Get the path from the CONFIG_VOLUME_PREFIX environment variable. This is useful for debugging and for generating configuration files in a different directory.

Change-Id: Ib85e3898804312ebb6677a5fa189fbfc357ce27c
(cherry picked from commit 0c62b6cd8d696befb1c0c31bb6e206199ce1edac)
2017-09-03 | Fix containerized zaqar-api db_sync | Bogdan Dobrelya | 1 | -1/+2
Correct the zaqar service name to match the bootstrap host id name.

Closes-bug: #1714253
Change-Id: Iced8f3a7e64d9023bd46a50629a56e087d1f6f24
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
(cherry picked from commit d782f687cb7794e0491c0d0f6dc3d9b28196dc96)
2017-09-02 | Separate config_volume for ringbuilder | James Slagle | 1 | -3/+20
Use a separate config_volume for swift_ringbuilder puppet_config tasks. This is necessary so that the swift_ringbuilder and swift-storage services don't both rsync files to the same bind-mounted directory. The rsync command from docker-puppet.py uses --delete-after, so when they both use the same config_volume, they can end up deleting the files generated by the other (depending on the order of execution).

Even though a separate config_volume is used, the rings must still end up in /etc/swift for the swift services containers. An additional container init task is used to copy the ring files into /var/lib/config-data/puppet-generated/swift/etc/swift so that they will be present when the actual swift services containers are started.

Change-Id: I05821e76191f64212704ca8e3b7428cda6b3a4b7
Closes-Bug: #1710952
(cherry picked from commit cba00abb7517efa6a8d9b8fb954563204323ffed)
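Schematically, the init-time copy looks like this (a sketch of the described mechanism; the image name and copy command are assumptions):

    # swift_ringbuilder renders into its own config volume, and an init
    # container copies the rings into the volume the swift services mount.
    docker_config:
      step_3:
        swift_copy_rings:
          image: 'tripleoupstream/centos-binary-swift-base:latest'  # assumed
          user: root
          command: ['/bin/bash', '-c', 'cp -a /etc/swift/*.ring.gz /var/lib/config-data/puppet-generated/swift/etc/swift/']
          volumes:
            - /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift:/etc/swift:ro
            - /var/lib/config-data/puppet-generated/swift:/var/lib/config-data/puppet-generated/swift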
2017-09-02 | Manually set healthchecks for _cron services | Dan Prince | 4 | -0/+8
The docker _cron services show up as (unhealthy) because they share the containers of the OpenStack services. As such we need to manually override the health checks for these services. By setting them to /bin/true the services should show up as healthy.

Change-Id: I46e12bcec226fbe2768c7fe8f0e7719df46401a9
Closes-bug: #1713183
(cherry picked from commit d1aaf0aadf487ccfcdecb47f3cfbf6087401242b)
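The override itself is a one-liner per service; for example (the service name is illustrative, the surrounding structure assumed):

    docker_config:
      step_4:
        example_api_cron:
          # shares the main service's container image, so the image's default
          # healthcheck would report unhealthy; force the check to pass instead
          healthcheck:
            test: /bin/true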
2017-09-01 | Merge "Add --wsrep-provider=none to the mysql_bootstrap container" into stable/pike | Jenkins | 1 | -2/+2
2017-08-31 | Remove puppet run and workarounds from tripleo_upgrade_node.sh | marios | 2 | -1/+22
For bug 1708115 and the O..P upgrade, for the upgrade of 'non-controllers' we are now generating ansible playbooks from the collected service upgrade_tasks, and these are executed instead of the legacy tripleo_upgrade_node.sh. To clarify, 'non-controllers' means any node for which the corresponding roles_data.yaml role has the disable_upgrade_deployment flag set to True.

As a first pass, I am removing the workarounds from the script but keeping its delivery mechanism for now in case it is still needed. We can either update here to remove it, or keep it until next cycle. The most important part for now is that we no longer 'manually' run puppet here. Instead the post_deploy_steps are also collected into a playbook and will be executed after the upgrade_tasks (see the bug for discussion of the mechanism and related reviews).

Change-Id: Ib017b0ab435ca9558cf8659d434489cdf01df955
Related-Bug: 1708115
(cherry picked from commit 4c5b9c5c967105536106fa4a7e1ec2352b14b08c)
2017-08-31 | Add --wsrep-provider=none to the mysql_bootstrap container | Michele Baldessari | 1 | -2/+2
Depending on the version of mariadb/galera installed, the mysql_bootstrap command might fail with the following unrevealing error:

    openstack-mariadb-docker:2017-08-28.10 "bash -ec 'if [ -e /v" 3 hours ago Exited (124) 3 hours ago

The timeout is actually due to the fact that the following snippet does not complete within 60 seconds:

    if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
    kolla_start
    mysqld_safe --skip-networking --wsrep-on=OFF &
    timeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done'
    mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';"
    mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO 'clustercheck'

The problem is that with older mariadb versions:

    galera-25.3.16-3.el7ost.x86_64
    mariadb-5.5.56-2.el7.x86_64

the mysqld_safe process starts in galera mode (as opposed to single local mode):

    170830 17:03:05 [Note] WSREP: Start replication
    170830 17:03:05 [Note] WSREP: GMCast version 0
    ...
    170830 17:03:05 [ERROR] WSREP: wsrep::connect() failed: 7
    170830 17:03:05 [ERROR] Aborting

That means that even though we specified --wsrep-on=OFF it was still starting in cluster mode. Let's add the extra --wsrep-provider=none, which older versions require. Let's also add a '-x' to this transient container, as that would have helped a bit: we would have understood right away that it was mysqld_safe that was not starting.

I tested this successfully on an environment that showed the problem. The new option is still accepted by newer DB versions in any case.

Closes-Bug: #1714057
Change-Id: Icf67fd2fbf520e8a62405b4d49e8d5169ff3925b
Co-Authored-By: Mike Bayer <mbayer@redhat.com>
(cherry picked from commit c19968ca852ab608513fe692aab958af25276220)
2017-08-31 | Support HA for OVN DBs containers using pacemaker bundle | Numan Siddique | 1 | -0/+140
ovn-dbs pacemaker bundle resources are created to support Master/Slave HA. puppet-tripleo already supports creating ovn-dbs bundle resources; the heat template added in this patch makes use of this.

Closes-bug: #1699085
Change-Id: I23c2d312cfb144f9afc14f0982a92670dc29d74c
(cherry picked from commit 444a39f5983e71e3222b6b7f8f523fce60aeece7)
2017-08-31 | Merge "Remove src_ceph from manila kolla_config" into stable/pike | Jenkins | 1 | -5/+0
2017-08-31 | Merge "Use switch to containers instead of take over playbook for ceph-ansible" into stable/pike | Jenkins | 1 | -1/+2
2017-08-30Merge "Set docker-puppet --health-cmd = /bin/true" into stable/pikeJenkins1-0/+1
2017-08-30container ovs-agent, ensure br-ex existsSteve Baker1-0/+31
Currently the neutron-ovs-agent container is stuck in a restart loop in many environments because the bridge br-ex is missing. This bridge is created by running the puppet class neutron::agents::ml2::ovs but limiting that run to the tag neutron::plugins::ovs::bridge. The hiera key neutron::agents::ml2::ovs::bridge_mappings should already exist to create the bridge with the required settings. This change ensures br-ex exists after step 3.

Since br-ex is created regardless of the chosen network config, environments/docker-network.yaml is no longer required. It can be deleted once there are no more references to it in CI and documentation.

Change-Id: Ie425148b0ad0f38e149c5fa0a97d98ec35d0a5bb
Closes-Bug: #1699261
Closes-Bug: #1691403
Closes-Bug: #1689556
(cherry picked from commit 76f130d6e8f7434433b2602af9794f1e9c742e1f)
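A sketch of what such a tag-limited puppet run looks like in a docker service template; the keys follow the docker-puppet conventions in this tree, but the exact values and parameter name are assumptions:

    docker_puppet_tasks:
      # run once per node at step 3 to create br-ex
      step_3:
        config_volume: 'neutron'
        puppet_tags: 'neutron::plugins::ovs::bridge'
        step_config: 'include ::neutron::agents::ml2::ovs'
        config_image: {get_param: DockerNeutronConfigImage}  # assumed parameter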
2017-08-30 | Remove src_ceph from manila kolla_config | Jan Provaznik | 1 | -5/+0
The pacemaker puppet module takes care of mounting /etc/ceph into the manila-share container (I23b6890b4cf7f1e6fe84b6be280dde82218275fc).

Change-Id: I1026b2436275b17cfe3ac85192d42c5268f0a630
Related-To: I23b6890b4cf7f1e6fe84b6be280dde82218275fc
(cherry picked from commit 0d8040ca33d42dbb7e06162f2b659ff6cbc0316f)
2017-08-30 | Use switch to containers instead of take over playbook for ceph-ansible | Giulio Fidente | 1 | -1/+2
On upgrade we need to run a specific playbook for ceph-ansible to be able to take over the pre-existing Ceph cluster deployed with puppet-ceph and then migrate it into a containerized deployment. This changes the playbook we use on upgrade so that it migrates the cluster into containers in addition to taking over the cluster.

Change-Id: I353c219832c41328f298fa7b65768ecf26c37f29
(cherry picked from commit cab266c9b2b62c0033f8fb66e8e61b7aa46b3e2b)
2017-08-30Merge "Support deploying OVN as container services" into stable/pikeJenkins2-0/+307
2017-08-29Set docker-puppet --health-cmd = /bin/trueDan Prince1-0/+1
Change-Id: Idf627a348cad8d5287c82cb393367210f1c760cf
Closes-bug: #1713185
(cherry picked from commit 20e1f0e8c9a2bbc3734f6eec0ee9ac2d5156f166)
2017-08-28 | Support deploying OVN as container services | Numan Siddique | 2 | -0/+307
This patch adds support for containerizing OVN services for the base profile. OVN db servers do not support active-active mode yet. They do support master/slave mode through pacemaker, which will be supported in a later patch.

Presently the tripleo container framework doesn't allow starting a container only on controller 0 (the bootstrap node). OVN db servers and ovn-northd are started on all the controllers, but only the OVN db servers running on the bootstrap controller are configured to listen on the tcp ports 6641 and 6642. The OVN neutron mechanism driver and ovn-controller use the ovn_dbs_vip to connect to the OVN db servers. Haproxy configures all the controllers as back ends, but only the OVN db servers running on controller 0 respond, since only they are configured properly. The OVN containers running on the other controller nodes do not interact in any way, but are wasted resources.

This patch also adds the scenario007-multinode-containers CI template.

Partial-bug: #1699085
Change-Id: I98b85191cc1fd8c2b166924044d704e79a4c4c8a
(cherry picked from commit e7cd03d2f0fcd8e3069246ced94f1a83869b8bea)
2017-08-25 | Containerise Barbican API | Janki Chhatbar | 1 | -0/+154
This containerises Barbican API in TripleO.

Change-Id: Icc5e9841ea48c806af4db61cd6de5e9a7a40a988
Partial-Bug: 1668924
Depends-On: I6b5ec18ccdd51b90ff27ff7d4341260dfba71e4e
(cherry picked from commit 6d338b809accea4d3ba09ca8363b1a97ed79b658)
2017-08-24Merge "Remove baremetal cron jobs on docker upgrade"Jenkins4-0/+16
2017-08-24Merge "Docker: Enable TLS in the internal network for libvirt"Jenkins1-1/+16
2017-08-23Merge "More fixes for the Ceph docker images url parsing"Jenkins1-2/+2
2017-08-23Merge "docker: Stop all active ceilometer services during compute upgrade"Jenkins1-1/+16
2017-08-23Docker: Enable TLS in the internal network for libvirtJuan Antonio Osorio Robles1-1/+16
Bind mounts the necessary certs and keys to enable live migrations using TLS.

bp tls-via-certmonger-containers

Depends-On: I26a7748b37059ea37f460d8c70ef684cc41b16d3
Change-Id: I81efa85d916823f740bf320c88a248403743a45b
2017-08-22Merge "Zaqar: Match service name with service-net-map"Jenkins1-1/+1
2017-08-22Zaqar: Match service name with service-net-mapJuan Antonio Osorio Robles1-1/+1
This is required for t-h-t to generate the appropriate hieradata.

Change-Id: I9b451eac4427a52ad8eec62ff89acc6c6d3ab799
Closes-Bug: #1712328
2017-08-22 | Fix configuration files path for logrotate container | Martin André | 1 | -1/+1
The config_volume is named 'crond', and so must be the path to the puppet-generated directory.

Change-Id: I13b4ad7642ddf3bc5d1f4aa979b4a91a89605fb1
Closes-Bug: #1712300
2017-08-21Merge "Add logrotate with crond service"Jenkins1-0/+84
2017-08-21Merge "Let mds create manila key and fs"Jenkins2-2/+2
2017-08-21TLS for containerized horizonJuan Antonio Osorio Robles1-0/+17
Bind mount the certificates needed for TLS.

bp tls-via-certmonger-containers

Change-Id: Ib9b533249be37665b77396a76133cc42fd15ee2b
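Schematically, the change amounts to extra read-only bind mounts on the horizon container; the certificate paths below are illustrative, following the usual certmonger layout:

    volumes:
      - /etc/pki/tls/certs/httpd-horizon.crt:/etc/pki/tls/certs/httpd-horizon.crt:ro
      - /etc/pki/tls/private/httpd-horizon.key:/etc/pki/tls/private/httpd-horizon.key:ro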
2017-08-21Merge "Enable TLS for containerized RabbitMQ"Jenkins1-0/+51
2017-08-21Add logrotate with crond serviceBogdan Dobrelya1-0/+84
Add a docker service template to provide containerized services log rotation with a crond job. Add OS::TripleO::Services::LogrotateCrond to the CI multinode-containers environment and to all environments, along with generic services like Ntp or Kernel. Set it to OS::Heat::None for non-containerized environments and only enable it in environments/docker.yaml.

Closes-bug: #1700912
Change-Id: Ic94373f0a0758e9959e1f896481780674437147d
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
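The wiring described above looks roughly like this (the template path is an assumption):

    # environments/docker.yaml -- enabled for containerized deployments:
    resource_registry:
      OS::TripleO::Services::LogrotateCrond: ../docker/services/logrotate-crond.yaml

    # non-containerized environments map it to a no-op:
    resource_registry:
      OS::TripleO::Services::LogrotateCrond: OS::Heat::None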
2017-08-19Merge "Mount ceph config on gnocchi statsd"Jenkins1-0/+9
2017-08-19Merge "Swith to the appropriate ceph-ansible playbook on upgrade"Jenkins1-1/+19
2017-08-19Merge "Convert scenario001-multinode-containers job to ceph-ansible"Jenkins1-0/+5
2017-08-19Merge "Add params needed for the ceph-ansible switch to containers playbook"Jenkins1-0/+1
2017-08-19Merge "Tag the ha containers with 'pcmklatest' at deploy time"Jenkins7-18/+221
2017-08-18Mount ceph config on gnocchi statsdPradeep Kilambi1-0/+9
gnocchi-statsd needs access to the ceph config. Let's mount the ceph config files so it doesn't throw conf_read_file errors.

Change-Id: I1426d580c8d8d60e986ca859f89eeb8799ab6bd2
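The fix is essentially one extra bind mount; a sketch assuming the src-ceph convention used elsewhere in this tree, not the literal diff:

    gnocchi_statsd:
      volumes:
        - /etc/ceph:/var/lib/kolla/config_files/src-ceph:ro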
2017-08-18 | More fixes for the Ceph docker images url parsing | Giulio Fidente | 1 | -2/+2
Existing code was still failing the following scenario:

    http://192.168.24.1:8787/ceph/rhceph-2-rhel7:latest

Now this has been tested with the following variations:

    http://192.168.24.1:8787/ceph/rhceph-2-rhel7:latest
    http://192.168.24.1:8787/rhceph-2-rhel7:latest
    192.168.24.1:8787/ceph/rhceph-2-rhel7:latest
    192.168.24.1:8787/rhceph-2-rhel7:latest
    192.168.24.1/ceph/daemon:latest

and then the same list without the custom registry host.

Change-Id: Ifc871de8c2678f6a6fc5d234bfb62e8273c1b0b7
2017-08-18Merge "Restore and split nova metadata docker service out of nova-api."Jenkins1-5/+61
2017-08-18Let mds create manila key and fsJan Provaznik2-2/+2
ceph-ansible will take care of setting up client keys both in ceph and on the client side. It will also create the filesystem for manila. To ensure that the manila manifest can work in the future both with puppet and with ceph-ansible, creation of the filesystem is moved to the ceph-mds manifest, creation of the manila key on the ceph side is moved to ceph-base (so the manila key is always created), and the manila key is added to ceph-external for external ceph deployments.

Key creation is removed from manila.pp in patch I2b5567a39ac8737e80758b705818cc1807dc8bf1.

Change-Id: I6308a317ffe0af244396aba5197c85e273e69f68
Related-To: Ia3ef9e9a2b159dacea01e38762145ff2bcc7ba27
Depends-On: I3f18bbe476c4f43fa4e162cc66c5df443122cd0c
2017-08-18 | Tag the ha containers with 'pcmklatest' at deploy time | Michele Baldessari | 7 | -18/+221
We need to tag the HA containers with a special tag so that the RA definition never changes. We do this step in THT as opposed to puppet because we need to guarantee that all images are tagged on all nodes *before* step 2, where the bundle gets created.

NB: Getting the image name without the tag will require some more yaql work to get all the cases right. Right now this works only if we enforce that the image has a ':tag' at the end of the name. So far this is always the case. If things change we will need to amend this code.

Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Co-Authored-By: Sofer Athlan-Guyot <sathlang@redhat.com>
Change-Id: I362e6cf26fba77d3f949b7d2fc4b35a3eab9087e
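Conceptually, each node does the equivalent of the following before step 2 (an ansible-flavoured sketch; the real template does this with yaql, and it relies on the ':tag' suffix mentioned above):

    - name: Retag HA container images with a stable tag
      shell: docker tag '{{ item }}' '{{ item.rsplit(":", 1)[0] }}:pcmklatest'
      with_items: "{{ ha_container_images }}"   # hypothetical list of image names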
2017-08-18 | Enable TLS for containerized RabbitMQ | Juan Antonio Osorio Robles | 1 | -0/+51
Bind mounts and adds the appropriate permissions for the cert and key that are used for TLS.

bp tls-via-certmonger-containers

Depends-On: I62ff89362cfcc80e6e62fad09110918c36802813
Change-Id: I48325893a00690e2f5d6f1d685f903234545d5b8