Adds this to the tripleo_upgrade_node.sh script executed by the
operator for the major upgrade; see the bug for more info.
Change-Id: Ic54b48b149594e8ea08e95152111bcdaf7b252b7
Closes-Bug: 1707926
The cron containers need to run as root in order to create PID files
correctly.
Additionally, the keystone_cron container was misconfigured to
use /usr/bin/cron instead of the correct /usr/bin/crond.
We also have an issue where the Kolla keystone container has
hard-coded ARGS for the docker container, which causes -DFOREGROUND
(an Apache-specific argument) to be appended to the kolla_start
command, thus causing crond to fail to start up correctly. This
change works around the issue by overriding the command and calling
kolla_set_configs manually. Once this is fixed in Kolla we can
revisit the workaround.
Change-Id: Ib8fb2bef9a3bb89131265051e9ea304525b58374
Related-bug: 1707785
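
For illustration, a minimal sketch of what the command override could look like in the keystone docker service template; the image parameter, crond flags and the elided settings are assumptions, not taken from this change:

    keystone_cron:
      image: {get_param: DockerKeystoneImage}   # illustrative image parameter
      user: root                                # crond needs root to create its PID file
      net: host
      restart: always
      command:
        # bypass the hard-coded Kolla ARGS by running kolla_set_configs ourselves,
        # then start crond in the foreground
        - '/usr/bin/bash'
        - '-c'
        - '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -s -n'
      # volumes and environment (config.json, KOLLA_CONFIG_STRATEGY, ...) elided
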
The syntax was wrong and wasn't actually bind mounting the CA file.
This fixes it.
Change-Id: Icfa2118ccd2a32fdc3d1af27e3e3ee02bdfbb13b
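
For reference, a correct bind-mount entry in a container's volumes list looks roughly like the following; the CA file path shown is an assumption, not taken from this change:

    volumes:
      # host_path:container_path:options -- all three fields are needed for a read-only bind mount
      - /etc/pki/ca-trust/source/anchors/local-ca.pem:/etc/pki/ca-trust/source/anchors/local-ca.pem:ro
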
Some resources have changed, so the environment needed syncing.
Change-Id: I9aa310ae80edfccd3ed28e67a431aad6e1ed8a7f
metadata_settings is meant to have a specific format or be completely
absent. Unfortunately the hook [1] doesn't handle an empty value for this,
so we remove it as an easy fix before figuring out how to add such
functionality to the hook.
[1] https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/nova_metadata/krb-service-principals.yaml
Co-Authored-By: Thomas Herve <therve@redhat.com>
Change-Id: Ieac62a8076e421b5c4843a3cbe1c8fa9e3825b38
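
For context, the expected metadata_settings format is roughly a list of service/network/type entries along these lines (the values shown are illustrative only):

    metadata_settings:
      - service: haproxy
        network: internal_api
        type: vip
      - service: haproxy
        network: external
        type: vip
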
In HA overclouds, the helper script clustercheck is called by HAProxy to poll
the state of the galera cluster. Make sure that a dedicated clustercheck user
is created at deployment, as is currently done in Ocata.
The creation of the clustercheck user happens on all controller nodes, right
after the database creation. This way, it does not need to wait for the galera
cluster to be up and running.
Partial-Bug: #1707683
Change-Id: If8e0b3f9e4f317fde5328e71115aab87a5fa655f
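
A minimal sketch of what the user creation could look like as part of the MySQL bootstrap tasks, right after the database creation; the container name, image parameter, password variable and exact grants are assumptions, not taken from this change:

    clustercheck_user_setup:                     # hypothetical one-off container
      start_order: 2
      detach: false
      image: {get_param: DockerMysqlImage}       # illustrative image parameter
      net: host
      command:
        - '/usr/bin/bash'
        - '-ec'
        - |
          mysql -e "CREATE USER IF NOT EXISTS 'clustercheck'@'localhost' IDENTIFIED BY '${CLUSTERCHECK_PASSWORD}';"
          mysql -e "GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';"
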
These are needed for the TLS everywhere bits.
Change-Id: I81fcf453fc1aaa2545e0ed24013f0f13b240a102
That was missed back then. Without it, bug 1697724 is not fixed for containers.
Change-Id: Ie859f10129cbdeebd9ea4522510768cec99a1df3
Related-Bug: #1697724
With OvS 2.7, DPDK is initialized immediately after the
dpdk-init flag is set. DPDK requires the hugepages configuration
to be present in the kernel args, which only takes effect after a
reboot. This patch reboots the node after applying the kernel
args; once the node is rebooted, DPDK is enabled and the
deployment continues.
Change-Id: Ide442e09c2bea56a38399247de588e63b4272326
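
For illustration, typical hugepage kernel args for a DPDK role might be set like this; the role name and the values are examples only:

    parameter_defaults:
      ComputeOvsDpdkParameters:     # role-specific parameters; the role name is an assumption
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
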
Running puppet apply with --logdest syslog results in all the output
being redirected to syslog. You get no error messages. In the case
where this fails, the subsequent debug task shows nothing useful as
there was no stdout/stderr.
Also pass --logdest console to docker-puppet's puppet apply so that
we get the output for the debug task.
Related-Bug: #1707030
Change-Id: I67df5eee9916237420ca646a16e188f26c828c0e
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
networking-odl no longer supports the network-topology port
binding controller and instead now relies on a pseudo-agent binding
controller. This means that each OVS node must be configured with
host configuration in OVSDB describing which VIF types, network types,
functions, etc. that OVS node supports. The end result is that this
affects where nova and neutron will schedule instances.
Changes include:
- Modify the default port binding controller to use the pseudo agent.
- Add the necessary per-role parameters to be able to configure host
config on a per-role basis, allowing for heterogeneous compute node
configurations.
Change-Id: I50458abf6a8a6bf724ad97accb6444d9c497d287
Closes-Bug: 1674995
Signed-off-by: Tim Rozet <trozet@redhat.com>
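
A hedged sketch of what the per-role host configuration could look like in an environment file; the parameter names and values below are illustrative, not copied from this change:

    parameter_defaults:
      OpenDaylightPortBindingController: pseudo-agentdb-binding   # assumed parameter name
      ComputeParameters:
        # hypothetical per-role host-config values consumed by the pseudo-agent
        OvsHostConfig:
          allowed_network_types: ['local', 'vlan', 'vxlan', 'gre']
          supported_vnic_types: ['normal']
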
Presently the ovn-controller service (puppet/services/neutron-compute-plugin-ovn.yaml)
is started only on compute nodes. But for the cases where the controller nodes
provide the north/south traffic, we need the ovn-controller service running on
controller nodes as well.
This patch:
- Renames neutron-compute-plugin-ovn.yaml to ovn-controller.yaml, which makes more
sense, and sets the service name to 'ovn-controller'.
- Adds the service 'ovn-controller' to the Controller and Compute roles.
- Adds the missing 'upgrade_tasks' section in ovn-dbs.yaml and ovn-controller.yaml.
Depends-On: Ie3f09dc70a582f3d14de093043e232820f837bc3
Depends-On: Ide11569d81f5f28bafccc168b624be505174fc53
Change-Id: Ib7747406213d18fd65b86820c1f86ee7c39f7cf5
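
A minimal sketch of the roles_data.yaml change this implies, with the surrounding services elided:

    - name: Controller
      ServicesDefault:
        # ...
        - OS::TripleO::Services::OVNDBs
        - OS::TripleO::Services::OVNController
    - name: Compute
      ServicesDefault:
        # ...
        - OS::TripleO::Services::OVNController
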
Running puppet apply with --logdest syslog results in all the output
being redirected to syslog. You get no error messages. In the case where
this ansible task fails, the subsequent debug task shows nothing useful
as there was no stdout/stderr.
Also pass --logdest console to puppet apply so that we get the output
for the debug task. My local testing showed that when specifying logdest
twice, both values were honored, and the output went to syslog and the
console.
Change-Id: Id5212b3ed27b6299e33e81ecf71ead554f9bdd29
Closes-Bug: #1707030
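
A rough sketch of the resulting invocation in the host deployment ansible tasks; the manifest path, surrounding options and the debug task wiring are assumptions:

    - name: Run puppet host configuration for the step
      shell: >-
        puppet apply --detailed-exitcodes
        --logdest syslog --logdest console
        /var/lib/tripleo-config/puppet_step_config.pp
      register: outputs
      ignore_errors: true
    - name: Debug output for the puppet host configuration task
      debug:
        var: outputs.stdout_lines
      # rc 2 means "changes applied" with --detailed-exitcodes, so it is not a failure
      failed_when: outputs.rc not in [0, 2]
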
Services that access the database have to read an extra MySQL configuration file
/etc/my.cnf.d/tripleo.cnf which holds client-only settings, like client bind
address and SSL configuration. The configuration file is thus used by
containerized services, but also by non-containerized services that still
run on the host.
In order to generate that client configuration file appropriately both on the
host and for containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change in the configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now only rely on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
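
A minimal sketch of what a containerized service's puppet_config could look like with the client profile included; the keystone service is used here only as an example:

    puppet_config:
      config_volume: keystone
      puppet_tags: keystone_config
      step_config:
        list_join:
          - "\n"
          - - include ::tripleo::profile::base::keystone
            - include ::tripleo::profile::base::database::mysql::client
      config_image: {get_param: DockerKeystoneConfigImage}   # illustrative image parameter
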
The iscsid service definition has a typo: config_setting should
read config_settings.
Change-Id: I12605dba61fd5f6ce80c3ab78e883ed5ebf3ca62
Just setting CloudDomain won't make the domains used consistent.
There are a number of CloudName parameters that must be set as well.
This change adds a sample environment that includes all of those
parameters so it is easy to set everything consistently.
Also fixes the description of CloudNameCtlplane to reflect the
actual use for that parameter.
Change-Id: I56d1c1c5619f83c16c4e8350aa84fccc3d748425
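
A sketch of such a sample environment; the hostnames and domain below are placeholders:

    parameter_defaults:
      CloudDomain: example.com
      CloudName: overcloud.example.com
      CloudNameInternal: overcloud.internalapi.example.com
      CloudNameStorage: overcloud.storage.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.example.com
      CloudNameCtlplane: overcloud.ctlplane.example.com
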
This sets the SSL flag in the docker service and exposes the
corresponding parameter in it.
Depends-On: I4c68a662c2433398249f770ac50ba0791449fe71
Change-Id: Ic3df2b9ab7432ffbed5434943e04085a781774a0
Add docker profiles to deploy Ceph in containers via ceph-ansible. This is
implemented by triggering a Mistral workflow during one of the overcloud
deployment steps, as provided by [1].
Some new service-specific parameters are available to determine the workflow to
execute and the ansible playbook to use. A new `CephAnsibleExtraConfig`
parameter can be used to provide arbitrary config variables consumed by `ceph-ansible`.
The pre-existing template params consumed up until the Pike release to
drive `puppet-ceph` continue to work and are translated, when possible, into
the equivalent `ceph-ansible` variable.
A new environment file is added to enable use of ceph-ansible;
the pre-existing puppet-ceph implementation remains unchanged and usable
for non-containerized deployments.
1. https://review.openstack.org/#/c/463324/
Change-Id: I81d44a1e198c83a4ef8b109b4eb6c611555dcdc5
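
A hedged sketch of how such an environment might be used; the file paths and the extra-config key below are illustrative, not copied from this change:

    resource_registry:
      OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
      OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml
    parameter_defaults:
      CephAnsibleExtraConfig:
        journal_size: 512      # example ceph-ansible variable
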
Using the separate neutron-opendaylight and SRIOV env files does not work,
because the SRIOV one enables the OVS agent (which ODL does not want or need)
and the default ODL env file has no Compute ML2 service because it is not
needed there. Thus a new environment file is needed for deploying these two
features in combination.
Closes-Bug: 1696667
Change-Id: I6f7a9368aa521de928c269619278c30acda03799
Signed-off-by: Tim Rozet <trozet@redhat.com>
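
A rough sketch of what the combined environment could contain; the exact service mappings and paths are assumptions, not copied from this change:

    resource_registry:
      OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None   # no OVS agent with ODL
      OS::TripleO::Services::NeutronSriovAgent: ../puppet/services/neutron-sriov-agent.yaml
      OS::TripleO::Services::OpenDaylightApi: ../puppet/services/opendaylight-api.yaml
      OS::TripleO::Services::OpenDaylightOvs: ../puppet/services/opendaylight-ovs.yaml
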
This change adds templates that are used to create network and
port definition templates for each network that is defined in
network_data.yaml. In order to render the templates, additional
fields have been added to the network_data.yaml file. If this
optional data is present, it will be used to populate the default
parameter values in the network template.
The only required parameter in the network_data.yaml file is
the network name. If the network will have IPv6 addresses, then
ipv6: true must be set on the network.
The existing networks have been modeled in the network_data.yaml,
but until these templates are removed from the j2_excludes.yaml
file they will not be generated on the fly. Any additional
networks will have templates generated.
This change also removes an unnecessary conditional from the
networks.j2.yaml file, since InternalApiNetwork doesn't need
to be reformatted as InternalNetwork (it's only used in this
one file).
A follow-up patch will remove the existing network definitions
so all networks are created dynamically.
Change-Id: If074f87494a46305c990a0ea332c7b576d3c6ed8
Depends-On: Iab8aca2f1fcaba0c8f109717a4b3068f629c9aab
Partially-Implements: blueprint composable-networks
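
An illustrative network_data.yaml entry; the field values here are examples, not the project defaults:

    - name: InternalApi              # the only required field
      name_lower: internal_api
      vip: true
      ip_subnet: '172.16.2.0/24'
      allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
      ipv6: false                    # set to true for an IPv6 network
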
The necessary resource registry entries were missing from this env
and the old environment was not deprecated.
Change-Id: I6a9b148514fc5da1f96b9fd7fe09f564c2f82419
The introduction of I90253412a5e2cd8e56e74cce3548064c06d022b1 broke the HAProxy
service due to some HAProxy-specific iptables rules being executed during the
puppet config step.
Ensure that no iptables call is performed during the generation of configuration
files. Move those calls to step 1, as implemented in the pacemaker-based
HAProxy service (Ib5a083ba3299a82645f1a0f9da0d482c6b89ee23).
Depends-On: I2d6274d061039a9793ad162ed8e750bd87bf71e9
Closes-Bug: #1697921
Change-Id: Ica3a432ff4a9e7a46df22cddba9ad96e1390b665
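
A hedged sketch of how the firewall rules could be applied at step 1 from the docker service template instead of during config generation; the container name, image parameter, puppet tags and mounted paths are assumptions, not copied from this change:

    docker_config:
      step_1:
        haproxy_firewall:            # hypothetical one-off container applying the firewall rules
          detach: false
          image: {get_param: DockerHAProxyImage}    # illustrative image parameter
          net: host
          privileged: true
          command:
            - '/usr/bin/bash'
            - '-c'
            # run only the firewall portion of the haproxy manifest at step 1
            - puppet apply --detailed-exitcodes --tags tripleo::firewall /etc/puppet/haproxy.pp
          volumes:
            - /var/lib/config-data/haproxy/etc/puppet:/etc/puppet:ro
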