This change removes the entry that containerises MongoDB by default,
because MongoDB should now be disabled since change
Id2e6550fb7c319fc52469644ea022cf35757e0ce.
Removing the entry means the default mapping to mongodb-disabled.yaml
takes effect.
This change also modifies the upgrade_tasks so that the mongod service
is only disabled when the service exists. There appear to be upgrade
scenarios which fail because mongodb was never installed in the first
place.
Change-Id: Ie09ce2a52128eef157e4d768c1c4776fc49f2324
Closes-Bug: #1715031
(cherry picked from commit cb81cbe3b5f3887f5d690c590e52b728f74d43c3)
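A rough sketch of the guarded task (Ansible; the task names and the
exact existence check are illustrative, not the template's verbatim
content):
"""
upgrade_tasks:
  - name: Check if the mongod service is present
    shell: systemctl list-unit-files | grep -q '^mongod.service'
    register: mongod_exists
    failed_when: false
  - name: Stop and disable the mongod service on upgrade
    service: name=mongod state=stopped enabled=no
    when: mongod_exists.rc == 0
"""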
|
|
Currently, for non-controller upgrades, we loop through the upgrade
steps and run the upgrade tasks based on 'when' conditionals that
combine the step number with the existing upgrade task condition.
Some of the tasks fail because the variables used in the 'when'
conditionals are not available at all steps. This change adds default
values to these variables where possible, or creates them for all
steps, to avoid such failures.
Related-Bug: 1708115
Change-Id: I5c731043cec8e31fc82ca98972a301baa7294c4f
(cherry picked from commit e2f00ef1dc98140087c81e202a520f549f9a0970)
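A minimal sketch of the pattern (variable and step number are
illustrative): supplying a default() keeps the conditional evaluable
on steps where the variable was never registered.
"""
- name: Example upgrade task guarded for every step
  shell: echo "real upgrade action here"
  when:
    - step|int == 2
    - is_bootstrap_node|default(false)|bool
"""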
|
|
This change adds support for manila::backend::dellemc_vmax
Change-Id: I92e189c8741c496ef6c27130f73829c327a99f1b
Implements: blueprint dellemc-vmax-manila
(cherry picked from commit 04daabdc8414e4435dc4cd3ccfea9a62b5631261)
|
|
It was being set using the NeutronAdmin endpoint, but it is an
authorization URL. Set it using the KeystoneInternal endpoint instead.
Change-Id: I23f4a895628ac909a1fe1f93cecefa84f25858b1
Closes-Bug: #1712908
(cherry picked from commit 7380183cf590b74f5ad84bb40a8afa08979c235b)
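A sketch of the corrected lookup, assuming the usual EndpointMap
pattern (the hiera key shown is illustrative, not the exact one in
the template):
"""
# before (wrong): NeutronAdmin is a service endpoint, not an auth URL
#   get_param: [EndpointMap, NeutronAdmin, uri]
# after:
neutron_auth_url:
  get_param: [EndpointMap, KeystoneInternal, uri_no_suffix]
"""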
|
|
See the full context at https://bugs.launchpad.net/bugs/1713612;
this service isn't containerized yet, so the plan is:
- in Pike, we'll run scenario004 (baremetal) and test bgp-vpn and l2gw
- in Queens, we'll run scenario004 (baremetal at the beginning), but
scenario004-container will be the default, and we'll re-add the two
services once they are containerized.
Change-Id: I04c2a9fb63420b7d8d3616a8ef7a50d2aadc6165
(cherry picked from commit fde4ff2c64f374e109dbb7da87cc7d72da5e0ef5)
|
|
Change-Id: Iefc0d04b19953ece60cf5c886258ed794e5c795d
Depends-On: Iba97c0a6a4b4b0529c6434d58275a3d362b74947
Related-Bug: #1712070
(cherry picked from commit 02cd34d148d6abf11cc64852f7931cbd4bccf767)
|
|
This service is necessary when we containerize TripleO with
Pacemaker.
The service is also added to the non-containerized scenario lists,
because the aim is to get rid of the -containers.yaml variants
eventually.
This shouldn't affect any jobs that don't include docker-ha.yaml: the
resource registry entry is mapped to OS::Heat::None by default, and
docker-ha.yaml maps it to the actual containerized clustercheck
service.
Change-Id: I342e29de52cb6ce069a05a2dbfb0501a2da200e6
Partial-Bug: #1712070
(cherry picked from commit 5b805cb37eec3097552314c6ce43c02c2a604d81)
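The mapping in question, sketched (the service key and path follow
the usual THT layout, but treat them as illustrative):
"""
# default in the resource registry (no-op unless overridden):
resource_registry:
  OS::TripleO::Services::Clustercheck: OS::Heat::None

# environments/docker-ha.yaml override (sketch):
#   OS::TripleO::Services::Clustercheck: ../docker/services/pacemaker/clustercheck.yaml
"""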
|
|
Get the path from the CONFIG_VOLUME_PREFIX environment variable.
This is useful for debugging and for generating configuration files
in a different directory.
Change-Id: Ib85e3898804312ebb6677a5fa189fbfc357ce27c
(cherry picked from commit 0c62b6cd8d696befb1c0c31bb6e206199ce1edac)
|
|
Correct the zaqar service name to match the bootstrap host id name
Closes-bug: #1714253
Change-Id: Iced8f3a7e64d9023bd46a50629a56e087d1f6f24
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
(cherry picked from commit d782f687cb7794e0491c0d0f6dc3d9b28196dc96)
|
|
This change adds a new define for cinder::backend::dellemc_vmax_iscsi
Change-Id: I7c685e0a3186da138964f17b487fb0c3533f58c7
Implements: blueprint dellemc-vmax-isci
(cherry picked from commit c77189905525c6fe834e001f2231b9eab788cd01)
|
|
Use a separate config_volume for swift_ringbuilder puppet_config tasks.
This is necessary so that the swift_ringbuilder and swift-storage
services don't both rsync files to the same bind mounted directory.
The rsync command from docker-puppet.py uses --delete-after, so when
they both use the same config_volume, they can end up deleting the files
generated by the other (depending on the order of execution).
Even though a separate config_volume is used, the rings must still end
up in /etc/swift for the swift service containers. An additional
container init task is used to copy the ring files into
/var/lib/config-data/puppet-generated/swift/etc/swift so that they are
present when the actual swift service containers are started.
Change-Id: I05821e76191f64212704ca8e3b7428cda6b3a4b7
Closes-Bug: #1710952
(cherry picked from commit cba00abb7517efa6a8d9b8fb954563204323ffed)
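Sketched shape of the two pieces (step number, image parameter, and
paths are illustrative):
"""
puppet_config:
  # previously 'swift', shared with swift-storage, so rsync
  # --delete-after from one service could remove the other's files
  config_volume: swift_ringbuilder
docker_config:
  step_3:
    swift_copy_rings:
      image: {get_param: DockerSwiftProxyImage}
      command: ['cp', '-a', '/swift_ringbuilder/.', '/etc/swift/']
      volumes:
        - /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift:/swift_ringbuilder:ro
        - /var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw
"""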
|
|
Change-Id: Id7d5967370a5d3fa0183359349f502f32a0109da
(cherry picked from commit e1b1b5654d70c4a38be340070648d0fb7932bcc8)
|
|
The docker _cron services show up as (unhealthy) because they share
the container images of the OpenStack services they run alongside. As
such, we need to manually override the health checks for these
services. By setting them to /bin/true, the services show up as
healthy.
Change-Id: I46e12bcec226fbe2768c7fe8f0e7719df46401a9
Closes-bug: #1713183
(cherry picked from commit d1aaf0aadf487ccfcdecb47f3cfbf6087401242b)
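Sketch of the override in a service's docker_config (container and
image names illustrative):
"""
nova_api_cron:
  image: {get_param: DockerNovaApiImage}
  healthcheck:
    test: /bin/true
"""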
|
|
Where applicable, use list_concat instead of yaql to build new lists: it
should be more resilient to errors, easier to debug, and less expensive.
Change-Id: I6d3dbc7ee8eac50f46023a35af4ec7f2d378fd87
Related-Bug: #1714005
(cherry picked from commit 8008089de24437757d3ba10299bb1041b4aa627a)
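For example (parameter names illustrative), a yaql expression such
as:
"""
yaql:
  expression: $.data.l1 + $.data.l2
  data:
    l1: {get_param: FirstList}
    l2: {get_param: SecondList}
"""
becomes:
"""
list_concat:
  - {get_param: FirstList}
  - {get_param: SecondList}
"""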
|
|
In the case of an OSP upgrade, some roles may require the
reconfiguration of the network via os-net-config, especially roles
with DPDK NICs. In order to facilitate this configuration per role,
the THT parameter 'NetworkDeploymentActions' is made role-specific.
Change-Id: I17a1812cf9e1c60fb893bf36dc99ab3ec5fc7250
(cherry picked from commit 88711c3b800257f6b333157eb3dfc8f4e7003a46)
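A sketch, assuming the usual role-name-prefixed parameter override
convention (role and values illustrative):
"""
parameter_defaults:
  # default for roles that don't override it
  NetworkDeploymentActions: ['CREATE']
  # role-specific: also re-run os-net-config on update for a DPDK role
  ComputeOvsDpdkNetworkDeploymentActions: ['CREATE', 'UPDATE']
"""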
|
|
All of the other SSL environments were converted, but this one was
missed. That's an inconsistent user experience and should be
cleaned up.
This environment also exposed a bug in the tool where it did not
include the parameter_defaults section key if all the parameters
were marked static.
Change-Id: I19bc422c22b9f60f781e696ce703b026dc317786
Closes-Bug: 1713761
(cherry picked from commit 7c06db3d1c384773c4abccbce450c259f75e5e4a)
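A sketch of the generator input involved (schema details
illustrative): when every parameter is marked static, the tool still
needs to emit the parameter_defaults section key.
"""
environments:
  - name: ssl/enable-internal-tls
    title: Enable internal TLS
    files:
      environments/enable-internal-tls.yaml:
        parameters: all
    static:
      - EnableInternalTLS
"""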
|
|
These were missed in the previous refactor of role.role.j2.yaml; we
shouldn't reference them via hard-coded values, or they become
mandatory in roles_data.yaml.
Change-Id: I014e7d6679c5733b17243d647eaad228c276585a
Closes-Bug: #1711656
(cherry picked from commit 4a4f6783081d9c5b74cda5149bef7655102fcfd8)
|
|
stable/pike
|
|
The containerized implementation of tacker is incomplete in THT and
relies on the pre-Pike single "tacker" container. Container builds
using the final Pike release of Kolla produce three tacker
containers, with separate conductor and server containers.
According to this bug[1], tacker does not even work without the
conductor. Our scenario job needs to be updated to actually test that
tacker is working.
This will need to be backported to Pike, and we can work on better
supporting tacker in containers in Queens.
[1] https://bugs.launchpad.net/tripleo/+bug/1710874
Change-Id: I7cab33687a05bf6ba5c6fb70ba21f3250d3ef381
Partial-Bug: 1714270
|
|
This change renders the IPv6 versions of the isolated
networks using j2. To allow for backward compatibility,
there will be 2 versions of the network definitions,
<network>.yaml and <network>_v6.yaml. If the ip_subnet
contains an IPv6 address, or if ipv6: true is set on the
network definition in network_data.yaml, then the
<network>.yaml version will contain an IPv6 definition,
otherwise the <network>.yaml will be IPv4, and the
<network>_v6.yaml will be IPv6.
In a future follow-up patch, we will probably only create the
required version of each network, either IPv4 or IPv6, not both.
The ipv6_subnet, ipv6_allocation_pools, and ipv6_gateway
settings in the network_data.yaml definition file are
used for the <network>_v6.yaml network definition.
Note that these subnet/cidr/gateway definitions only set
the defaults, which can be overridden with parameters
set in an environment file.
Since the parameters for IP and subnet range are the
same (e.g. InternalApiNetCidr applies to both IPv4/v6),
only one version can be used at a time. If an operator
wishes to use dual-stack IPv4/IPv6, then two different
networks should be created, and both networks can be
applied to a single interface.
Note that the workflow for the operator is the same as
before this change, but a new example template has been
added to environments/network-environment-v6.yaml.
Change-Id: I0e674e4b1e43786717ae6416571dde3a0e11a5cc
Partially-Implements: blueprint composable-networks
Closes-bug: 1714115
(cherry picked from commit dd299f08bd6b1df43760148d83ce9b6e09ba6572)
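A sketch of the relevant network_data.yaml knobs (values
illustrative):
"""
- name: InternalApi
  name_lower: internal_api
  vip: true
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
  # either an IPv6 ip_subnet or 'ipv6: true' makes <network>.yaml IPv6
  ipv6: true
  ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
  ipv6_gateway: 'fd00:fd00:fd00:2000::1'
"""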
|
|
Change-Id: Ie3f8798c2c3f967ffc867b1a55abab13f9f042a1
|
|
stable/pike
|
|
A storage backend has to be selected when deploying manila,
otherwise the manila-share service will fail to start. For this, we
have some environment files specifying the configuration for
different storage backends. We need dockerized versions of these
environment files.
This patch set adds those environment files.
Change-Id: I9886016b02bec26699af1f8165d7b0702dfe8b9b
Partial-Bug: #1668922
(cherry picked from commit d7d54594410f60ea6ebf1301048d95f64c66f645)
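A sketch of the shape of such a dockerized backend environment
(backend name, path, and parameter are illustrative):
"""
resource_registry:
  OS::TripleO::Services::ManilaBackendNetapp: ../puppet/services/manila-backend-netapp.yaml
parameter_defaults:
  ManilaNetappBackendName: tripleo_netapp
"""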
|
|
stable/pike
|
|
stable/pike
|
|
These were edited manually and the input file was not updated, which
is causing problems when trying to generate new/updated envs.
Change-Id: Ia2e53e52361e35d94e2dedf9b8885498693bc2e0
Partial-Bug: 1713761
(cherry picked from commit 406b1982ba530abdd6c629780130851e8e335ae8)
|
|
For bug 1708115 and the O..P upgrade, and for the upgrade of
'non-controllers', we are now generating ansible playbooks from the
collected service upgrade_tasks, and these are executed instead of
the legacy tripleo_upgrade_node.sh.
To clarify, by 'non-controllers' we mean any node for which the
corresponding roles_data.yaml role has the disable_upgrade_deployment
flag set to True.
As a first pass, I am removing the workarounds from the script but
keeping its delivery mechanism for now in case it is still needed.
We can either update here to remove it, or keep it until next cycle.
The most important part for now is that we no longer 'manually' run
puppet here. Instead, the post_deploy_steps are also collected into a
playbook and will be executed after the upgrade_tasks (see the bug
for discussion of the mechanism and related reviews).
Change-Id: Ib017b0ab435ca9558cf8659d434489cdf01df955
Related-Bug: 1708115
(cherry picked from commit 4c5b9c5c967105536106fa4a7e1ec2352b14b08c)
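Roughly, the collected tasks end up in a playbook of this shape (a
sketch, not the generator's exact output):
"""
- hosts: overcloud
  serial: 1
  tasks:
    - name: Run the collected upgrade tasks once per step
      include_tasks: upgrade_tasks.yaml
      with_sequence: start=0 end=6
      loop_control:
        loop_var: step
"""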
|
|
Depending on the version of mariadb/galera installed, the
mysql_bootstrap command might fail with the following unrevealing
error:
openstack-mariadb-docker:2017-08-28.10 "bash -ec 'if [ -e /v" 3 hours ago Exited (124) 3 hours ago
The timeout is actually due to the fact that the following snippet
does not complete within 60 seconds:
"""
if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
kolla_start
mysqld_safe --skip-networking --wsrep-on=OFF &
timeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done'
mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';"
mysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO 'clustercheck'
"""
The problem is that with older mariadb versions:
galera-25.3.16-3.el7ost.x86_64
mariadb-5.5.56-2.el7.x86_64
the mysqld_safe process starts in galera mode (as opposed to single
local mode):
170830 17:03:05 [Note] WSREP: Start replication
170830 17:03:05 [Note] WSREP: GMCast version 0
...
170830 17:03:05 [ERROR] WSREP: wsrep::connect() failed: 7
170830 17:03:05 [ERROR] Aborting
That means that even though we specified --wsrep-on=OFF, it is still
starting in cluster mode. Let's add the extra --wsrep-provider=none,
which older versions required.
Let's also add '-x' to this transient container, as that would have
helped a bit here: we would have understood right away that it was
mysqld_safe that was not starting. I tested this successfully on an
environment that showed the problem. The new option is still accepted
by newer DB versions in any case.
Closes-Bug: #1714057
Change-Id: Icf67fd2fbf520e8a62405b4d49e8d5169ff3925b
Co-Authored-By: Mike Bayer <mbayer@redhat.com>
(cherry picked from commit c19968ca852ab608513fe692aab958af25276220)
|
|
- Set gnocchi archivepolicy in scenario001 job to high
- Set polling interval to 15 seconds instead of 300
Change-Id: Ie12abe1f03d000824c5fb1a46d74b94ce49d7876
(cherry picked from commit 0855d4c7b12d27721044ab09ca0d6e8f188d2e90)
|
|
This patch removes the hard-coded references to ODL-related images.
The logic is implemented in tripleo-common to render images based on
the environment file specified.
Change-Id: I9a11072f98e1245dc32d27d0b0e9bc6e9e19399f
Partial-Bug: #1713685
(cherry picked from commit 21a6b66c8bb5377bc1391e3f582467de7f7b5562)
|
|
The changes in puppet/role.role.j2.yaml should have been made
to overcloud.j2.yaml, because we don't want the hard-coded reference
to the deprecated name in the parent template. Note that we need to
pass this value from the parent template so that the %index%
substitution works, which is required for predictable placement via
*SchedulerHints.
Partial-Bug: #1711656
Change-Id: Ided1802daac48d737f53caa7093df814ba101dd0
(cherry picked from commit c6207379db07544240b699ba000537b58d9fb68f)
|
|