path: root/docker/services
2017-09-03  Fix containerized zaqar-api db_sync  (Bogdan Dobrelya, 1 file, -1/+2)
Correct the zaqar service name to match the bootstrap host id name.
Closes-Bug: #1714253
Change-Id: Iced8f3a7e64d9023bd46a50629a56e087d1f6f24
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
(cherry picked from commit d782f687cb7794e0491c0d0f6dc3d9b28196dc96)
2017-09-01  Merge "Add --wsrep-provider=none to the mysql_bootstrap container" into stable/pike  (Jenkins, 1 file, -2/+2)
2017-08-31  Remove puppet run and workarounds from tripleo_upgrade_node.sh  (marios, 2 files, -1/+22)
For bug 1708115 and the O..P upgrade of 'non-controllers', we now generate ansible playbooks from the collected service upgrade_tasks and execute these instead of the legacy tripleo_upgrade_node.sh. To clarify, 'non-controllers' means any node whose corresponding roles_data.yaml role has the disable_upgrade_deployment flag set to True (see the roles_data excerpt below). As a first pass, the workarounds are removed from the script but its delivery mechanism is kept for now in case it is still needed; we can either remove it here later or keep it until next cycle. The most important part for now is that we no longer 'manually' run puppet here. Instead, the post_deploy_steps are also collected into a playbook and executed after the upgrade_tasks (see the bug for discussion of the mechanism and related reviews).
Change-Id: Ib017b0ab435ca9558cf8659d434489cdf01df955
Related-Bug: #1708115
(cherry picked from commit 4c5b9c5c967105536106fa4a7e1ec2352b14b08c)
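A minimal, hypothetical roles_data.yaml excerpt illustrating the flag the playbook generation keys off; the role name and service list are placeholders, not taken from this change:

    # roles_data.yaml (illustrative only; role name and services are placeholders)
    - name: Compute
      disable_upgrade_deployment: True  # marks the role as a 'non-controller' for the O..P upgrade
      ServicesDefault:
        - OS::TripleO::Services::NovaCompute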
2017-08-31  Add --wsrep-provider=none to the mysql_bootstrap container  (Michele Baldessari, 1 file, -2/+2)
Depending on the version of mariadb/galera installed, the mysql_bootstrap command might fail with the following unrevealing error:

    openstack-mariadb-docker:2017-08-28.10 "bash -ec 'if [ -e /v"  3 hours ago  Exited (124) 3 hours ago

The timeout is actually due to the fact that the following snippet does not complete within 60 seconds:

    if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
    kolla_start
    mysqld_safe --skip-networking --wsrep-on=OFF &
    timeout ${DB_MAX_TIMEOUT} /bin/bash -c ''until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done''
    mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER ''clustercheck''@''localhost'' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}'';"
    mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO ''clustercheck'

The problem is that with older mariadb versions (galera-25.3.16-3.el7ost.x86_64, mariadb-5.5.56-2.el7.x86_64) the mysqld_safe process starts in galera mode (as opposed to single local mode):

    170830 17:03:05 [Note] WSREP: Start replication
    170830 17:03:05 [Note] WSREP: GMCast version 0
    ...
    170830 17:03:05 [ERROR] WSREP: wsrep::connect() failed: 7
    170830 17:03:05 [ERROR] Aborting

That means that even though we specified --wsrep-on=OFF it still starts in cluster mode. Let's add the extra --wsrep-provider=none, which older versions required. Let's also add '-x' to this transient container, as that would have helped here: we would have understood right away that it was mysqld_safe that was not starting. I tested this successfully on an environment that showed the problem. The new option is still accepted by newer DB versions in any case.
Closes-Bug: #1714057
Change-Id: Icf67fd2fbf520e8a62405b4d49e8d5169ff3925b
Co-Authored-By: Mike Bayer <mbayer@redhat.com>
(cherry picked from commit c19968ca852ab608513fe692aab958af25276220)
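A sketch of how the two changes ('-x' and the extra --wsrep-provider=none) could look in the mysql_bootstrap entry of the docker service template; the surrounding keys and parameter name are assumptions for illustration, and the remainder of the quoted bootstrap script would follow unchanged:

    # Sketch only: surrounding structure assumed, not quoted from mysql.yaml
    docker_config:
      step_2:
        mysql_bootstrap:
          detach: false
          image: {get_param: DockerMysqlImage}  # parameter name assumed
          command:
            - 'bash'
            - '-ecx'  # '-x' added so the failing step is visible in the container output
            # first lines of the bootstrap script, with the extra --wsrep-provider=none
            - |
              if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
              kolla_start
              mysqld_safe --skip-networking --wsrep-on=OFF --wsrep-provider=none &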
2017-08-31  Merge "Remove src_ceph from manila kolla_config" into stable/pike  (Jenkins, 1 file, -5/+0)
2017-08-31  Merge "Use switch to containers instead of take over playbook for ceph-ansible" into stable/pike  (Jenkins, 1 file, -1/+2)
2017-08-30  container ovs-agent, ensure br-ex exists  (Steve Baker, 1 file, -0/+31)
Currently the neutron-ovs-agent container is stuck in a restart loop in many environments because the bridge br-ex is missing. This bridge is created by running the puppet class neutron::agents::ml2::ovs but limiting that run to the tag neutron::plugins::ovs::bridge. The hiera data neutron::agents::ml2::ovs::bridge_mappings should already exist to create the bridge with the required settings. This change ensures br-ex exists after step 3 (see the sketch below). Since br-ex is now created regardless of the chosen network config, environments/docker-network.yaml is no longer required. It can be deleted once there are no more references to it in CI and documentation.
Change-Id: Ie425148b0ad0f38e149c5fa0a97d98ec35d0a5bb
Closes-Bug: #1699261
Closes-Bug: #1691403
Closes-Bug: #1689556
(cherry picked from commit 76f130d6e8f7434433b2602af9794f1e9c742e1f)
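A rough sketch of the kind of docker_puppet_tasks entry described above: run the ovs agent manifest at step 3, limited to the bridge tag, so br-ex is created before the agent container starts. The key names follow the THT docker service conventions, but the exact values here are assumptions rather than quotes from the change:

    docker_puppet_tasks:
      # Assumed values: apply only the bridge-creation resources at step 3
      step_3:
        config_volume: 'neutron'
        puppet_tags: 'neutron::plugins::ovs::bridge'
        step_config: 'include ::neutron::agents::ml2::ovs'
        config_image: {get_param: DockerNeutronConfigImage}  # parameter name assumed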
2017-08-30  Remove src_ceph from manila kolla_config  (Jan Provaznik, 1 file, -5/+0)
The pacemaker puppet module takes care of mounting /etc/ceph into the manila-share container (I23b6890b4cf7f1e6fe84b6be280dde82218275fc).
Change-Id: I1026b2436275b17cfe3ac85192d42c5268f0a630
Related-To: I23b6890b4cf7f1e6fe84b6be280dde82218275fc
(cherry picked from commit 0d8040ca33d42dbb7e06162f2b659ff6cbc0316f)
2017-08-30  Use switch to containers instead of take over playbook for ceph-ansible  (Giulio Fidente, 1 file, -1/+2)
On upgrade we need to run a specific playbook for ceph-ansible so it can take over the pre-existing Ceph cluster deployed with puppet-ceph and then migrate it into a containerized deployment. This changes the playbook we use on upgrade so that it migrates the cluster into containers in addition to taking over the cluster.
Change-Id: I353c219832c41328f298fa7b65768ecf26c37f29
(cherry picked from commit cab266c9b2b62c0033f8fb66e8e61b7aa46b3e2b)
2017-08-30  Merge "Support deploying OVN as container services" into stable/pike  (Jenkins, 2 files, -0/+307)
2017-08-28  Support deploying OVN as container services  (Numan Siddique, 2 files, -0/+307)
This patch adds the support to containerize OVN services for the base profile. The OVN db servers do not support active-active mode yet; they do support master-slave mode through pacemaker, which will be added in a later patch. Presently the tripleo container framework doesn't allow starting a container only on controller 0 (the bootstrap node). The OVN db servers and ovn-northd are started on all the controllers, but only the OVN db servers running on the bootstrap controller are configured to listen on the tcp ports 6641 and 6642. The OVN neutron mechanism driver and the ovn-controllers use the ovn_dbs_vip to connect to the OVN db servers. Haproxy configures all the controllers as back ends, but only the OVN db servers running on controller 0 respond, since only they are configured properly. The OVN containers running on the other controller nodes do not interact in any way and are wasted resources. This patch also adds the scenario007-multinode-containers CI template.
Partial-Bug: #1699085
Change-Id: I98b85191cc1fd8c2b166924044d704e79a4c4c8a
(cherry picked from commit e7cd03d2f0fcd8e3069246ced94f1a83869b8bea)
2017-08-25  Containerise Barbican API  (Janki Chhatbar, 1 file, -0/+154)
This containerises the Barbican API in TripleO.
Change-Id: Icc5e9841ea48c806af4db61cd6de5e9a7a40a988
Partial-Bug: #1668924
Depends-On: I6b5ec18ccdd51b90ff27ff7d4341260dfba71e4e
(cherry picked from commit 6d338b809accea4d3ba09ca8363b1a97ed79b658)
2017-08-24  Merge "Remove baremetal cron jobs on docker upgrade"  (Jenkins, 4 files, -0/+16)
2017-08-24  Merge "Docker: Enable TLS in the internal network for libvirt"  (Jenkins, 1 file, -1/+16)
2017-08-23  Merge "More fixes for the Ceph docker images url parsing"  (Jenkins, 1 file, -2/+2)
2017-08-23  Merge "docker: Stop all active ceilometer services during compute upgrade"  (Jenkins, 1 file, -1/+16)
2017-08-23  Docker: Enable TLS in the internal network for libvirt  (Juan Antonio Osorio Robles, 1 file, -1/+16)
Bind mounts the necessary certs and keys to enable live migrations using TLS.
bp tls-via-certmonger-containers
Depends-On: I26a7748b37059ea37f460d8c70ef684cc41b16d3
Change-Id: I81efa85d916823f740bf320c88a248403743a45b
2017-08-22  Merge "Zaqar: Match service name with service-net-map"  (Jenkins, 1 file, -1/+1)
2017-08-22  Zaqar: Match service name with service-net-map  (Juan Antonio Osorio Robles, 1 file, -1/+1)
This is required for t-h-t to generate the appropriate hieradata.
Change-Id: I9b451eac4427a52ad8eec62ff89acc6c6d3ab799
Closes-Bug: #1712328
2017-08-22  Fix configuration files path for logrotate container  (Martin André, 1 file, -1/+1)
The config_volume is named 'crond', and so must be the path to the puppet-generated directory.
Change-Id: I13b4ad7642ddf3bc5d1f4aa979b4a91a89605fb1
Closes-Bug: #1712300
2017-08-21  Merge "Add logrotate with crond service"  (Jenkins, 1 file, -0/+84)
2017-08-21  Merge "Let mds create manila key and fs"  (Jenkins, 2 files, -2/+2)
2017-08-21  TLS for containerized horizon  (Juan Antonio Osorio Robles, 1 file, -0/+17)
Bind mount the certificates needed for TLS.
bp tls-via-certmonger-containers
Change-Id: Ib9b533249be37665b77396a76133cc42fd15ee2b
2017-08-21  Merge "Enable TLS for containerized RabbitMQ"  (Jenkins, 1 file, -0/+51)
2017-08-21  Add logrotate with crond service  (Bogdan Dobrelya, 1 file, -0/+84)
Add a docker service template that provides log rotation for containerized services via a crond job. Add OS::TripleO::Services::LogrotateCrond to CI multinode-containers and to all environments, alongside generic services like Ntp or Kernel. Set it to OS::Heat::None for non-containerized environments and enable it only in environments/docker.yaml (see the registry sketch below).
Closes-Bug: #1700912
Change-Id: Ic94373f0a0758e9959e1f896481780674437147d
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
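The wiring described above looks roughly like this in the resource_registry sections; the template path is an assumption for illustration:

    # environments/docker.yaml - enable log rotation for containerized deployments
    resource_registry:
      OS::TripleO::Services::LogrotateCrond: ../docker/services/logrotate-crond.yaml  # path assumed

    # non-containerized environments - keep the service disabled
    resource_registry:
      OS::TripleO::Services::LogrotateCrond: OS::Heat::None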
2017-08-19  Merge "Mount ceph config on gnocchi statsd"  (Jenkins, 1 file, -0/+9)
2017-08-19  Merge "Switch to the appropriate ceph-ansible playbook on upgrade"  (Jenkins, 1 file, -1/+19)
2017-08-19  Merge "Convert scenario001-multinode-containers job to ceph-ansible"  (Jenkins, 1 file, -0/+5)
2017-08-19  Merge "Add params needed for the ceph-ansible switch to containers playbook"  (Jenkins, 1 file, -0/+1)
2017-08-19  Merge "Tag the ha containers with 'pcmklatest' at deploy time"  (Jenkins, 7 files, -18/+221)
2017-08-18  Mount ceph config on gnocchi statsd  (Pradeep Kilambi, 1 file, -0/+9)
gnocchi-statsd needs access to the ceph config. Let's mount the ceph config files so it doesn't throw conf_read_file errors.
Change-Id: I1426d580c8d8d60e986ca859f89eeb8799ab6bd2
2017-08-18  More fixes for the Ceph docker images url parsing  (Giulio Fidente, 1 file, -2/+2)
Existing code was still failing the following scenario:

    http://192.168.24.1:8787/ceph/rhceph-2-rhel7:latest

Now this has been tested with the following variations:

    http://192.168.24.1:8787/ceph/rhceph-2-rhel7:latest
    http://192.168.24.1:8787/rhceph-2-rhel7:latest
    192.168.24.1:8787/ceph/rhceph-2-rhel7:latest
    192.168.24.1:8787/rhceph-2-rhel7:latest
    192.168.24.1/ceph/daemon:latest

and then the same list without the custom registry host.
Change-Id: Ifc871de8c2678f6a6fc5d234bfb62e8273c1b0b7
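For illustration only (the decomposition below is implied by the tested variations, not quoted from the change), each URL is expected to split into registry host, image name and tag; the key names are placeholders:

    # Expected split of two of the tested URLs (placeholder keys)
    'http://192.168.24.1:8787/ceph/rhceph-2-rhel7:latest':
      registry_host: '192.168.24.1:8787'
      image_name: 'ceph/rhceph-2-rhel7'
      image_tag: 'latest'
    '192.168.24.1/ceph/daemon:latest':
      registry_host: '192.168.24.1'
      image_name: 'ceph/daemon'
      image_tag: 'latest'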
2017-08-18  Merge "Restore and split nova metadata docker service out of nova-api."  (Jenkins, 1 file, -5/+61)
2017-08-18  Let mds create manila key and fs  (Jan Provaznik, 2 files, -2/+2)
ceph-ansible will take care of setting up client keys both in ceph and on the client side. It will also create the filesystem for manila. To ensure that the manila manifest can work in the future both with puppet and with ceph-ansible, creation of the filesystem is moved to the ceph-mds manifest and creation of the manila key on the ceph side is moved to ceph-base (so the manila key is always created); the manila key is also added to ceph-external for external ceph deployments. Key creation is removed from manila.pp in patch I2b5567a39ac8737e80758b705818cc1807dc8bf1.
Change-Id: I6308a317ffe0af244396aba5197c85e273e69f68
Related-To: Ia3ef9e9a2b159dacea01e38762145ff2bcc7ba27
Depends-On: I3f18bbe476c4f43fa4e162cc66c5df443122cd0c
2017-08-18  Tag the ha containers with 'pcmklatest' at deploy time  (Michele Baldessari, 7 files, -18/+221)
We need to tag the HA containers with a special tag so that the RA definition never changes. We do this step in THT as opposed to puppet because we need to guarantee that all images are tagged on all nodes *before* step 2, where the bundle gets created. NB: getting the image name without the tag will require some more yaql work to get all the cases right. Right now this works only if we enforce that the image has a ':tag' at the end of the name; so far this is always the case. If things change we will need to amend this code.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Co-Authored-By: Sofer Athlan-Guyot <sathlang@redhat.com>
Change-Id: I362e6cf26fba77d3f949b7d2fc4b35a3eab9087e
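A minimal illustration of the retagging idea only; the change itself implements this in THT (with yaql), not with Ansible, and the image value below is a made-up example:

    # Illustration only - strip the trailing ':tag' and add the stable 'pcmklatest' tag
    - name: Tag the mariadb HA image with 'pcmklatest' so the pacemaker resource definition never changes
      shell: docker tag "{{ ha_image }}" "{{ ha_image | regex_replace(':[^:/]+$', '') }}:pcmklatest"
      vars:
        ha_image: 192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest  # example value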
2017-08-18  Enable TLS for containerized RabbitMQ  (Juan Antonio Osorio Robles, 1 file, -0/+51)
Bind mounts and adds the appropriate permissions for the cert and key that are used for TLS.
bp tls-via-certmonger-containers
Depends-On: I62ff89362cfcc80e6e62fad09110918c36802813
Change-Id: I48325893a00690e2f5d6f1d685f903234545d5b8
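The TLS changes in this series boil down to bind mounting the certmonger-managed cert and key into the container; a sketch of what such a volumes entry typically looks like, with host paths that are assumptions rather than quotes from the change:

    volumes:
      # Host paths below are assumed examples of certmonger-managed files
      - /etc/pki/tls/certs/rabbitmq.crt:/etc/pki/tls/certs/rabbitmq.crt:ro
      - /etc/pki/tls/private/rabbitmq.key:/etc/pki/tls/private/rabbitmq.key:ro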
2017-08-18  Convert scenario001-multinode-containers job to ceph-ansible  (Giulio Fidente, 1 file, -0/+5)
Updates ci/environments/scenario001-multinode-containers.yaml to use ceph-ansible instead of puppet-ceph.
Change-Id: Idbd02a3c7404daecdc6e2c45ea6d3478bf70552c
Depends-On: Ifa4937624ed14a3ece48dd92ba4f69b5e4928e77
2017-08-18  Merge "Refactor setup_docker_host.sh as host_prep_tasks"  (Jenkins, 1 file, -0/+13)
2017-08-18  Merge "Containerize Manila Share for HA"  (Jenkins, 1 file, -0/+142)
2017-08-18  Merge "Add support for installing Ceph MDS via ceph-ansible"  (Jenkins, 2 files, -0/+101)
2017-08-18  Restore and split nova metadata docker service out of nova-api.  (Oliver Walsh, 1 file, -5/+61)
I2c39a2957fd95dd261b5b8c4df5e66e00a68d2f7 switched nova-api from eventlet to httpd; however, we need to continue running the eventlet service as it is required for the nova metadata API. Since this should be tied to the OS::TripleO::Services::NovaMetadata service, the required config is duplicated in nova-metadata.yaml.
Change-Id: I398575d565d5527bcaa1c8b33b9de2e1e0f2f6fd
Depends-On: Id3407e151566d16c6ae1e1ea8c1b021dac22e727
Closes-Bug: #1711425
2017-08-17  Merge "Mount NFS volume to docker container."  (Jenkins, 1 file, -0/+11)
2017-08-17  Merge "Enable TLS configuration for containerized RabbitMQ"  (Jenkins, 1 file, -0/+15)
2017-08-17  Merge "Enable TLS for containerized MySQL"  (Jenkins, 1 file, -9/+60)
2017-08-17  Merge "Enable TLS for containerized haproxy"  (Jenkins, 1 file, -8/+57)
2017-08-17  Merge "Enable TLS configuration for containerized HAProxy"  (Jenkins, 1 file, -5/+52)
2017-08-17  Refactor setup_docker_host.sh as host_prep_tasks  (Jiri Stransky, 1 file, -0/+13)
What we previously did with setup_docker_host.sh can now be achieved with host_prep_tasks (see the sketch below), freeing up the NodeUserData interface for other use cases.
Closes-Bug: #1711387
Change-Id: Iaac90efd03e37ceb02c312f9c15c1da7d4982510
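A minimal sketch of a host_prep_tasks section, a list of Ansible tasks that THT runs on the host before starting containers; the task below is an assumed example of docker host setup, not the actual contents of this change:

    host_prep_tasks:
      # Assumed example task - make sure the docker daemon is up before any container starts
      - name: Ensure the docker service is enabled and running
        service:
          name: docker
          state: started
          enabled: true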
2017-08-17  Switch to the appropriate ceph-ansible playbook on upgrade  (Giulio Fidente, 1 file, -1/+19)
When performing an overcloud upgrade, we need to run a different ceph-ansible playbook from the one we run for fresh deployments. This change adds the logic to parse StackUpdateType and set the playbook path accordingly (see the sketch below).
Change-Id: I2882f62a80954e6e7324bb86e5ac91c059698a60
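Roughly the kind of selection described above, expressed as a Heat condition; the condition name and playbook paths are assumptions for illustration (the switch-over playbook ships with ceph-ansible):

    # Condition name and paths assumed, not quoted from the change
    conditions:
      perform_upgrade:
        equals: [{get_param: StackUpdateType}, 'UPGRADE']

    # ... later, when composing the ceph-ansible invocation:
    #   playbook:
    #     if:
    #       - perform_upgrade
    #       - /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
    #       - /usr/share/ceph-ansible/site-docker.yml.sample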
2017-08-17  Merge "Containerize virtlogd"  (Jenkins, 1 file, -21/+32)
2017-08-16  Containerize Manila Share for HA  (Victoria Martinez de la Cruz, 1 file, -0/+142)
This service allows configuring and deploying manila-share containers in an HA overcloud managed by pacemaker. The containers are managed and run by pacemaker. Pacemaker runs the standard Kolla image but overrides the initial command so that it explicitly calls manila-share; this way, we shield ourselves from any unexpected future change in Kolla. This container needs to use the 'docker_config' section to invoke puppet (as opposed to 'docker_puppet_tasks') because, due to the HA composability, each resource creation needs to happen on the bootstrap node of that service, and 'docker_puppet_tasks' will only run on the controller/primary role. Based on work done in fdb233e64e3d78014dd7e351abfed5aec5035866.
Partial-Bug: #1668922
Change-Id: Ifa94c506db5eb667690a19d594115a93d2a790b2
Depends-On: I797eea2f7788f65411964ccb852b5707e916416f