During an OSP upgrade, some of the roles may require reconfiguration
of the network via os-net-config, especially roles with DPDK NICs. To
facilitate this configuration per role, the THT parameter
'NetworkDeploymentActions' is made role-specific.

Change-Id: I17a1812cf9e1c60fb893bf36dc99ab3ec5fc7250
(cherry picked from commit 88711c3b800257f6b333157eb3dfc8f4e7003a46)
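As an illustration, a role-specific override in an environment file
might look like this (a sketch; the ComputeOvsDpdk role name and the
values are hypothetical, following the usual {role.name}-prefixed
parameter convention):

    parameter_defaults:
      # Global default: only run os-net-config on stack create
      NetworkDeploymentActions: ['CREATE']
      # Hypothetical per-role override: also re-run os-net-config on
      # stack update so DPDK NICs get reconfigured during the upgrade
      ComputeOvsDpdkNetworkDeploymentActions: ['CREATE', 'UPDATE']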
All of the other SSL environments were converted to be generated by
the sample environment generator, but this one was missed. That is an
inconsistent user experience and should be cleaned up. This
environment also exposed a bug in the tool: it did not include the
parameter_defaults section key if all of the parameters were marked
static.

Change-Id: I19bc422c22b9f60f781e696ce703b026dc317786
Closes-Bug: 1713761
(cherry picked from commit 7c06db3d1c384773c4abccbce450c259f75e5e4a)
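For context, a generator input entry has roughly this shape (a sketch;
the name, title, file path, and parameter names here are illustrative,
not the actual entry):

    environments:
      - name: ssl/enable-tls
        title: Enable TLS on the Overcloud Public Endpoints
        files:
          puppet/extraconfig/tls/tls-cert-inject.yaml:
            parameters:
              - SSLCertificate
              - SSLKey
        # Parameters marked static; when every parameter is static the
        # tool must still emit the parameter_defaults section key
        static:
          - SSLCertificate
          - SSLKey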
These parameters were missed in the previous refactor of
role.role.j2.yaml; we shouldn't reference them via hard-coded values,
or they become mandatory in roles_data.yaml.

Change-Id: I014e7d6679c5733b17243d647eaad228c276585a
Closes-Bug: #1711656
(cherry picked from commit 4a4f6783081d9c5b74cda5149bef7655102fcfd8)
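A sketch of the pattern in question (the parameter and role attribute
names are hypothetical): referencing a role attribute through a j2
default keeps it optional rather than mandatory in roles_data.yaml:

    # illustrative .j2.yaml fragment
    {{role.name}}ExampleParam:
      type: json
      # falls back to an empty value when the role does not define it
      default: {{role.example_attribute|default({})}}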
This change renders the IPv6 versions of the isolated networks using
j2. To allow for backward compatibility, there will be two versions of
each network definition, <network>.yaml and <network>_v6.yaml. If the
ip_subnet contains an IPv6 address, or if ipv6: true is set on the
network definition in network_data.yaml, then the <network>.yaml
version will contain an IPv6 definition; otherwise <network>.yaml will
be IPv4 and <network>_v6.yaml will be IPv6.

In a future follow-up patch, we will probably create only the required
version of each network, either IPv4 or IPv6, not both.

The ipv6_subnet, ipv6_allocation_pools, and ipv6_gateway settings in
the network_data.yaml definition file are used for the
<network>_v6.yaml network definition. Note that these
subnet/cidr/gateway definitions only set the defaults, which can be
overridden with parameters set in an environment file.

Since the parameters for IP and subnet range are the same (e.g.
InternalApiNetCidr applies to both IPv4 and IPv6), only one version can
be used at a time. If an operator wishes to use dual-stack IPv4/IPv6,
two different networks should be created, and both networks can be
applied to a single interface.

Note that the workflow for the operator is the same as before this
change, but a new example template has been added in
environments/network-environment-v6.yaml.

Change-Id: I0e674e4b1e43786717ae6416571dde3a0e11a5cc
Partially-Implements: blueprint composable-networks
Closes-Bug: 1714115
(cherry picked from commit dd299f08bd6b1df43760148d83ce9b6e09ba6572)
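For example, a network_data.yaml entry along these lines (values
illustrative) would drive the rendering, with the ipv6_* keys feeding
the internal_api_v6.yaml definition:

    - name: InternalApi
      name_lower: internal_api
      vip: true
      # used for internal_api.yaml (IPv4 unless ipv6: true is set)
      ip_subnet: '172.16.2.0/24'
      allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
      # used for internal_api_v6.yaml
      ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
      ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
      ipv6_gateway: 'fd00:fd00:fd00:2000::1'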
A storage backend has to be selected when deploying manila, otherwise
the manila-share service will fail to start. For this we have
environment files specifying the configuration for different storage
backends, and we need dockerized versions of these environment files.
This patch set adds them.

Change-Id: I9886016b02bec26699af1f8165d7b0702dfe8b9b
Partial-Bug: #1668922
(cherry picked from commit d7d54594410f60ea6ebf1301048d95f64c66f645)
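As a rough sketch, such a dockerized backend environment differs from
its puppet counterpart mainly in where the backend service resource
points (the path and parameter names below are illustrative guesses,
not the actual files):

    resource_registry:
      # illustrative mapping for a containerized manila backend
      OS::TripleO::Services::ManilaBackendCephFs: ../docker/services/manila-backend-cephfs.yaml

    parameter_defaults:
      ManilaCephFSNativeBackendName: cephfsnative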
These environments were edited manually and the input file was not
updated, which causes problems when trying to generate new or updated
environments.

Change-Id: Ia2e53e52361e35d94e2dedf9b8885498693bc2e0
Partial-Bug: 1713761
(cherry picked from commit 406b1982ba530abdd6c629780130851e8e335ae8)
For bug 1708115 and the O..P upgrade, for the upgrade of
'non-controllers' we are now generating ansible playbooks from the
collected service upgrade_tasks, and these are executed instead of the
legacy tripleo_upgrade_node.sh. To clarify, 'non-controllers' means
any node whose corresponding roles_data.yaml role has the
disable_upgrade_deployment flag set to True.

As a first pass, I am removing the workarounds from the script but
keeping its delivery mechanism for now in case it is still needed. We
can either update here to remove it, or keep it until next cycle. The
most important part for now is that we no longer 'manually' run puppet
here. Instead, the post_deploy_steps are also collected into a
playbook and executed after the upgrade_tasks (see the bug for
discussion of the mechanism and related reviews).

Change-Id: Ib017b0ab435ca9558cf8659d434489cdf01df955
Related-Bug: 1708115
(cherry picked from commit 4c5b9c5c967105536106fa4a7e1ec2352b14b08c)
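For context, the playbooks are generated from snippets that each
service template exposes under upgrade_tasks, roughly of this shape
(the task itself is illustrative):

    upgrade_tasks:
      # tags select which upgrade step a task runs in
      - name: Stop the example service before package upgrades
        tags: step2
        service: name=openstack-example-service state=stopped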
Depending on the version of mariadb/galera installed, the
mysql_bootstrap command might fail with the following unrevealing
error:

    openstack-mariadb-docker:2017-08-28.10 "bash -ec 'if [ -e /v" 3 hours ago Exited (124) 3 hours ago

The timeout is actually due to the fact that the following snippet
does not complete within 60 seconds:

    if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
    kolla_start
    mysqld_safe --skip-networking --wsrep-on=OFF &
    timeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done'
    mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';"
    mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';"

The problem is that with older mariadb versions:

    galera-25.3.16-3.el7ost.x86_64
    mariadb-5.5.56-2.el7.x86_64

the mysqld_safe process starts in galera mode (as opposed to single
local mode):

    170830 17:03:05 [Note] WSREP: Start replication
    170830 17:03:05 [Note] WSREP: GMCast version 0
    ...
    170830 17:03:05 [ERROR] WSREP: wsrep::connect() failed: 7
    170830 17:03:05 [ERROR] Aborting

That means that even though we specified --wsrep-on=OFF it still
starts in cluster mode. Let's add the extra --wsrep-provider=none
option, which older versions required. Let's also add '-x' to this
transient container's shell invocation, as that would have helped a
bit: we would have understood right away that it was mysqld_safe that
was not starting. I tested this successfully on an environment that
showed the problem. The new option is still accepted by newer DB
versions in any case.

Closes-Bug: #1714057
Change-Id: Icf67fd2fbf520e8a62405b4d49e8d5169ff3925b
Co-Authored-By: Mike Bayer <mbayer@redhat.com>
(cherry picked from commit c19968ca852ab608513fe692aab958af25276220)
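For illustration, with both changes applied the bootstrap command
might look like this in the mysql service template (assuming the
command is defined as a bash list, as the snippet above suggests;
surrounding structure simplified):

    command:
      - 'bash'
      - '-ecx'   # -x added so the failing command shows up in the logs
      - |
        if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
        kolla_start
        mysqld_safe --skip-networking --wsrep-on=OFF --wsrep-provider=none &
        timeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done'
        mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';"
        mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';"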
- Set the gnocchi archive policy in the scenario001 job to 'high'
- Set the polling interval to 15 seconds instead of 300

Change-Id: Ie12abe1f03d000824c5fb1a46d74b94ce49d7876
(cherry picked from commit 0855d4c7b12d27721044ab09ca0d6e8f188d2e90)
This patch removes hard-coded references to the ODL related images.
The logic to render images based on the specified environment file is
implemented in tripleo-common.

Change-Id: I9a11072f98e1245dc32d27d0b0e9bc6e9e19399f
Partial-Bug: #1713685
(cherry picked from commit 21a6b66c8bb5377bc1391e3f582467de7f7b5562)
The changes in puppet/role.role.j2.yaml should have been made to
overcloud.j2.yaml, because we don't want a hard-coded reference to the
deprecated name in the parent template. Note that we need to pass this
value from the parent template so the %index% substitution works,
which is required for predictable placement via *SchedulerHints.

Partial-Bug: #1711656
Change-Id: Ided1802daac48d737f53caa7093df814ba101dd0
(cherry picked from commit c6207379db07544240b699ba000537b58d9fb68f)
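For reference, predictable placement relies on %index% being expanded
per-node in the role's scheduler hints, e.g.:

    parameter_defaults:
      ControllerSchedulerHints:
        'capabilities:node': 'controller-%index%'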
This change adds support for manila::backend::dellemc_vnx.

Change-Id: I5fa5c2d6956429d1b9c12a5af6d4a887ed0624d9
Implements: blueprint dellemc-vnx-manila
(cherry picked from commit a3debcfa8b2cbb3acaba292e082b0a3b0ee8ef54)
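A hypothetical environment fragment enabling the backend (the resource
path and parameter names are illustrative guesses, not the actual
interface):

    resource_registry:
      OS::TripleO::Services::ManilaBackendVNX: ../puppet/services/manila-backend-vnx.yaml

    parameter_defaults:
      ManilaVNXBackendName: tripleo_manila_vnx
      ManilaVNXDriverHandlesShareServers: false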
This change adds support for manila::backend::dellemc_unity.

Change-Id: Idec67d190b12359e8e6f1c157577088fa84ef41d
Implements: blueprint dellemc-unity-manila
(cherry picked from commit c5ee7b7714c712807f33ca1645186d33103a2264)
The event pipeline publisher defaults in the heat templates are
different from what puppet sets, so we need to set the Manage flag to
true for the override to take effect. Without this we keep defaulting
back to the puppet defaults. We can flip this back to false once
panko:// is dropped as a supported option from the pipeline.

Change-Id: I2248c165783dddfb4cb7cf5644884dd8f6e6ed63
(cherry picked from commit 941b5d6797ea54afbc7b822ee045ce1186627e7c)
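Assuming the usual parameter names for these knobs, the override
amounts to something like:

    parameter_defaults:
      ManageEventPipeline: true
      # heat-template publishers, overriding what puppet would set
      EventPipelinePublishers: ['gnocchi://', 'panko://']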
Currently the neutron-ovs-agent container is stuck in a restart loop
in many environments because the bridge br-ex is missing. This bridge
is created by running the puppet class neutron::agents::ml2::ovs,
limiting that run to the tag neutron::plugins::ovs::bridge. The hiera
key neutron::agents::ml2::ovs::bridge_mappings should already exist to
create the bridge with the required settings. This change should
ensure br-ex exists after step 3.

Since br-ex is now created regardless of the chosen network config,
environments/docker-network.yaml is no longer required. It can be
deleted once there are no more references to it in CI and
documentation.

Change-Id: Ie425148b0ad0f38e149c5fa0a97d98ec35d0a5bb
Closes-Bug: #1699261
Closes-Bug: #1691403
Closes-Bug: #1689556
(cherry picked from commit 76f130d6e8f7434433b2602af9794f1e9c742e1f)
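The mapping that feeds that hiera key comes from the standard
bridge-mappings parameter, e.g.:

    parameter_defaults:
      # rendered into neutron::agents::ml2::ovs::bridge_mappings
      NeutronBridgeMappings: 'datacentre:br-ex'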
The pacemaker puppet module takes care of mounting /etc/ceph into the
manila-share container (I23b6890b4cf7f1e6fe84b6be280dde82218275fc).

Change-Id: I1026b2436275b17cfe3ac85192d42c5268f0a630
Related-To: I23b6890b4cf7f1e6fe84b6be280dde82218275fc
(cherry picked from commit 0d8040ca33d42dbb7e06162f2b659ff6cbc0316f)
On upgrade we need to run a specific playbook for ceph-ansible so it
can take over the pre-existing Ceph cluster deployed with puppet-ceph
and then migrate it into a containerized deployment. This changes the
playbook we use on upgrade so that it migrates the cluster into
containers in addition to taking over the cluster.

Change-Id: I353c219832c41328f298fa7b65768ecf26c37f29
(cherry picked from commit cab266c9b2b62c0033f8fb66e8e61b7aa46b3e2b)
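Conceptually the change just points ceph-ansible at a different
playbook; with the usual playbook parameter this could be expressed
along these lines (the path is illustrative):

    parameter_defaults:
      # a take-over-plus-containerize playbook shipped by ceph-ansible
      CephAnsiblePlaybook: /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml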
These values should be integers, as specified in the parameter
definitions of the class; otherwise it will fail.

Change-Id: I06b6e46c0722516e28e8bff4d481fb4b7a08bd61
Closes-Bug: #1713659
(cherry picked from commit 4bea8cf918463c43c7d5f4e46984ab54271ea3e5)
The example composable roles are missing the docker service
declaration, so they currently do not work when trying to deploy with
containerized services.

Change-Id: I986ae561b950e74aacea10bce84673e8d0c9bd97
Closes-Bug: #1713755
(cherry picked from commit 50c975d1590930e6ce453942f99759a25ec08703)
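For example, a custom role entry in roles_data.yaml needs the docker
service alongside its other services (the role shown is illustrative):

    - name: MyCustomRole
      ServicesDefault:
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::Sshd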
Leave the version fields blank, since the release notes document
applies to all versions. That avoids the manual changes we have had to
make until now.

Change-Id: Ibb33ade808c9866b5314b7dda60a44000089a467
(cherry picked from commit 4782394044a8f66de63db7772b7a5992a781cc57)
docker-puppet.py is very aggressive about running concurrently: it
uses python multiprocessing to run multiple config-generating
containers at once. This seems to work well in general, but in some
cases, perhaps when the registry is slow or under heavy load, timeouts
can occur. Lately I'm seeing several 'container did not start before
the specified timeout' errors that always seem to occur when config
files are generated (when docker-puppet.py is initially executed).

A couple of things:

- When config files are generated, this is the first time most of the
  containers are pulled to each host machine during deployment.
- docker-puppet.py runs many of these processes at once; some of them
  run faster, others not.
- The docker daemon's pull limit defaults to 3. This would throttle
  the above a bit, perhaps contributing to the likelihood of a
  timeout.

One solution that seems to work for me is to set the PROCESS_COUNT in
docker-puppet.py to 3. As this matches the docker daemon's default it
is probably safer, at the cost of being slightly slower in some cases.

Change-Id: I17feb3abd9d36fe7c95865a064502ce9902a074e
Closes-Bug: #1713188
(cherry picked from commit 949d367ddeb42eff913cdbed733ccf6239b4864b)
Change-Id: Idf627a348cad8d5287c82cb393367210f1c760cf
Closes-bug: #1713185
(cherry picked from commit 20e1f0e8c9a2bbc3734f6eec0ee9ac2d5156f166)
Run manila-share under the control of pacemaker, as is done for the
cinder-volume service in the same circumstance.

Change-Id: Ic97f01913bae2a388c962a38fa175eb1d763cdcb
Depends-On: Ie31f2d5ccf458f5fcfe8bec5f2c37f45070cfde2
Closes-Bug: #1712842
(cherry picked from commit 8fa6c6e58c7ac0d32bf2f0dfb586683cf006e3bf)
This patch adds the support to containerize OVN services for the base
profile.

The OVN db servers do not support active-active mode yet. They do
support master-slave mode through pacemaker, which will be supported
in a later patch.

Presently the tripleo container framework doesn't allow starting a
container only on controller 0 (the bootstrap node). The OVN db
servers and ovn-northd are started on all the controllers, but only
the OVN db servers running on the bootstrap controller are configured
to listen on the tcp ports 6641 and 6642. The OVN neutron mechanism
driver and the ovn-controllers use the ovn_dbs_vip to connect to the
OVN db servers. Haproxy configures all the controllers as backends,
but only the OVN db servers running on controller 0 respond, since
only they are configured properly. The OVN containers running on the
other controller nodes do not interact in any way, but are wasted
resources.

This patch also adds the scenario007-multinode-containers CI template.

Partial-Bug: #1699085
Change-Id: I98b85191cc1fd8c2b166924044d704e79a4c4c8a
(cherry picked from commit e7cd03d2f0fcd8e3069246ced94f1a83869b8bea)
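A containerized deployment would map the OVN services to their docker
templates along these lines (the paths are illustrative):

    resource_registry:
      OS::TripleO::Services::OVNDBs: ../docker/services/ovn-dbs.yaml
      OS::TripleO::Services::OVNController: ../docker/services/ovn-controller.yaml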
Change-Id: I603ce6922130fe32aa1a154df8146ee582bf1a45
(cherry picked from commit b1d7887ce710a98f061100e2878a54c06a5d09e2)