|
We have disabled mongo by default in containers via:
Id2e6550fb7c319fc52469644ea022cf35757e0ce Disable mongodb by default
Ie09ce2a52128eef157e4d768c1c4776fc49f2324 Containerized mongodb, disable by default, fix upgrade
Let's not use it in scenario002 either.
NB: Not entirely clean cherry-pick due to scenario002-multinode-containers.yaml
having many more services in master than in pike.
Change-Id: I0d2df25ed797ffb8425ba81736526d3688e5de5c
Closes-Bug: #1724679
(cherry picked from commit 900416d9809bf4446c0c037128edb033ab9b3bcc)
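For reference, disabling a service this way is a one-line resource_registry override; a minimal sketch of the pattern (the surrounding scenario file layout is assumed):

  resource_registry:
    # Map the MongoDB service to OS::Heat::None so the scenario no longer deploys it
    OS::TripleO::Services::MongoDb: OS::Heat::None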
|
|
stable/pike
|
We currently have the following in the registry:
OS::TripleO::Services::SwiftDispersion: puppet/services/swift-dispersion.yaml
Since this service is included by default in the Controller role
it will be installed on the host even on a containerized deployment.
Let's noop this in docker.yaml until a containerized version of it
gets merged.
Change-Id: Ic2793d0cfb7b20f4661cb1a45793cae67a4868b4
Closes-Bug: #1723788
(cherry picked from commit 0c8ba9651734a0e6180ca443c87c8c8ca5169d6c)
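A minimal sketch of the corresponding noop in environments/docker.yaml (same override pattern used elsewhere in the registry):

  resource_registry:
    # No containerized swift-dispersion template exists yet, so stop the
    # baremetal version from being applied on the host
    OS::TripleO::Services::SwiftDispersion: OS::Heat::None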
|
|
Closes-bug: #1722758
Change-Id: I0161c534807ca45e2d2b6fcace5fc3e26eb450a2
(cherry picked from commit 7e398bf18910e062415ce4e70236ce98577aed13)
|
|
Instead of using the key provided by the user on the command line, create
a new short-lived key, give it to Mistral to create a tripleo-admin
user with it, and then remove the short-lived key.
Co-Authored-By: John Fulton <fulton@redhat.com>
Change-Id: I6e6ed83fa62319d59d7289b16a1412a340ea6b26
Closes-Bug: #1724578
(cherry picked from commit b0e72c1413c9441aa592b56583e87715e7096152)
|
|
For the deployed-server custom roles, the deprecation handling is removed.
As these have always been custom roles with definitions generated from
role.role.j2.yaml, the original (now deprecated) param names were
never present for anyone using this deployed-server roles data file.
Specifically, deprecated_server_resource_name is quite troublesome as it
will cause the server resources to get replaced on upgrade as the
resource name changes.
These were all introduced in If4a8388634fb1dcbb47beeabbd3db005abc80d4e,
and this commit removes them.
Change-Id: I1c1267f19db972b55466f4649eda62dd7814b94a
Closes-Bug: #1723177
(cherry picked from commit 6e7a431df0b7790512eb1920500b8878701c691a)
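As an illustration, a deployed-server role entry now carries only the regular fields; the role name and service list below are placeholders, the field names come from the standard roles_data format:

  - name: ControllerDeployedServer      # placeholder role name
    CountDefault: 1
    ServicesDefault:
      - OS::TripleO::Services::Keystone
      # ... rest of the service list ...
    # Note: no deprecated_param_* entries and, in particular, no
    # deprecated_server_resource_name, which would rename the server
    # resources and force a replacement on upgrade.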
|
|
Merge "..." into stable/pike
|
|
Merge "... configuration" into stable/pike
|
|
Merge "..." into stable/pike
|
Because puppet was not invoked with --detailed-exitcodes, we ignored
a large number of puppet errors during deploy. Swift storage fails
during the puppet_config step with the following error:
Debug: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Package[swift-object]: Not tagged with file, file_line, concat, augeas, cron, swift_proxy_config, swift_config, swift_container_config, swift_container_sync_realms_config, swift_account_config, swift_object_config, swift_object_expirer_config, rsync::server
Debug: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Package[swift-object]: Resource is being skipped, unscheduling all events
Debug: Executing: '/usr/bin/systemctl is-active xinetd'
Debug: Executing: '/usr/bin/systemctl is-enabled xinetd'
Debug: Executing: '/usr/bin/systemctl unmask xinetd'
Debug: Executing: '/usr/bin/systemctl start xinetd'
Debug: Runing journalctl command to get logs for systemd start failure: journalctl -n 50 --since '5 minutes ago' -u xinetd --no-pager
Debug: Executing: 'journalctl -n 50 --since '5 minutes ago' -u xinetd --no-pager'
Error: Systemd start for xinetd failed!
The problem is that by using the rsync::server tag we end up including
the xinetd class automatically, which will try to start a service inside
a container. By nooping the xinetd class (see the sketch after the log
output below), we're able to avoid the systemctl calls and get a
successful deployment. The resulting swift_rsync container seems to work
correctly:
[root@overcloud-controller-0 ~]# docker exec -it swift_rsync /bin/bash -c "ps -axuwf"
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 10 0.0 0.0 47444 1624 pts/1 Rs+ 18:16 0:00 ps -axuwf
root 1 0.0 0.0 188 4 ? Ss 17:27 0:00 /usr/local/bin/dumb-init /bin/bash /usr/local/bin/kolla_start
root 6 0.0 0.0 11036 924 ? Ss 17:27 0:00 /usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf
[root@overcloud-controller-0 ~]# docker logs swift_rsync 2>&1|tail -n4
INFO:__main__:Deleting /etc/rsyncd.conf
INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/rsyncd.conf to /etc/rsyncd.conf
INFO:__main__:Writing out command to execute
Running command: '/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'
Change-Id: I5e43e8fd61e002d2acc56a7de52e6aae64ab60be
Closes-Bug: #1723463
(cherry picked from commit b5eeeab73e12efecc86ea7deebc105eee0739510)
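A sketch of the noop itself, assuming the usual containerized-service layout of docker/services/swift-storage.yaml (the SwiftStorageBase resource name and the exact structure are assumptions): prepending an empty class definition shadows the real xinetd class, so including rsync::server no longer drags it in.

  puppet_config:
    config_volume: swift
    step_config:
      list_join:
        - "\n"
        - - "class xinetd() {}"   # empty definition shadows the real class
          - {get_attr: [SwiftStorageBase, role_data, step_config]}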
|
|
For deployments running on RHEL with Satellite 6 (or beyond) with a
Capsule (Katello API enabled), the Katello API is available on port 8443,
so the previous API ping didn't work in this case.
Capsule is now supported: we simply check whether the
katello-ca-consumer-latest rpm is available to determine that the
Satellite version is 6 or later.
Closes-Bug: #1716777
Change-Id: If76763b367917fc15f609ad144679750602826eb
(cherry picked from commit ad3ea5bb7a2ee2cb1ae6b1d21b2f0b5a177c9fc6)
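The real logic lives in the rhel-registration script; purely as an illustration of the detection idea, an ansible-style sketch (satellite_fqdn is a hypothetical variable):

  - name: Probe for katello-ca-consumer-latest on the Satellite/Capsule
    uri:
      url: "https://{{ satellite_fqdn }}/pub/katello-ca-consumer-latest.noarch.rpm"
      method: HEAD
      validate_certs: false
    register: katello_ca_probe
    failed_when: false

  - name: Treat a successful probe as Satellite 6 or later (covers Capsule on 8443)
    set_fact:
      satellite_6_or_later: "{{ katello_ca_probe.status == 200 }}"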
|
|
deployed-server-roles-data was out of sync and missing some parameters
introduced during the Pike cycle.
This patch syncs the roles_data between the two files.
Change-Id: If4a8388634fb1dcbb47beeabbd3db005abc80d4e
Closes-Bug: #1723177
(cherry picked from commit 0e6c86dc123e9f558c4d3d594ff50e85dd00171f)
|
|
Some services only mount /var/lib/config-data/puppet-generated/$service,
not /var/lib/config-data/$service, so handle this case in the
docker-puppet.py code that maps the mounted volumes to the services when
adding the config hash to the container environment.
Change-Id: I3bdb7609f322458584ac9597ffbfefb057b84646
Closes-Bug: #1720208
(cherry picked from commit 3a932b056914d148fa460b8890fc0e631c817a40)
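For context, the bind-mount shape in question follows the kolla config_files convention; a sketch, with <service> as a placeholder:

  volumes:
    # Some containers only mount the puppet-generated tree ...
    - /var/lib/config-data/puppet-generated/<service>/:/var/lib/kolla/config_files/src:ro
    # ... and not /var/lib/config-data/<service>, which is what docker-puppet
    # previously matched on when computing the config hash.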
|
|
This adds a heat-api-cloudwatch-disabled.yaml and wires it up in
the resource registry. During the Ocata to Pike upgrade this service
will thus be stopped and disabled by default.
If you wish to keep the Heat Cloudwatch API then you should instead
use the provided heat-api-cloudwatch.yaml environment file.
Change-Id: I3f90a9799b90ca365f675f593371c1d3701fede6
Related-Bug: 1713531
(cherry picked from commit 4d21451666f2dd7a8935da3a7166a9afc2ccd6bd)
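To illustrate the wiring (file paths assumed from the names in this message):

  # Default registry entry after this change: the service gets stopped and
  # disabled during the Ocata to Pike upgrade.
  resource_registry:
    OS::TripleO::Services::HeatApiCloudwatch: puppet/services/disabled/heat-api-cloudwatch-disabled.yaml

  # To keep the service, deploy with environments/heat-api-cloudwatch.yaml,
  # which maps the same key back to puppet/services/heat-api-cloudwatch.yaml.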
|
Before Pike we used to be able to add -e environments/config-debug.yaml
and that would give us debug logs for puppet. With the move to
ansible-driven puppet runs we lost this feature.
Let's make sure that the old ConfigDebug variable still works with
the ansible playbook-based deploy steps. With this patch and ConfigDebug
set to true, we correctly get the puppet debug logs:
TASK [debug] *******************************************************************
ok: [localhost] => {
"(outputs.stderr|default('')).split('\n')|union(outputs.stdout_lines|default([]))": [
"Warning: Undefined variable 'deploy_config_name'; ",
" (file & line not available)",
"Warning: This method is deprecated, please use the stdlib validate_legacy function, with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
" (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:25:in `deprecation')",
"Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=UTF-8",
"Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d",
"Debug: Loading external facts from /var/lib/puppet/facts.d",
....
Change-Id: Ia726fb8ca4a6f7bbbd7a1284d76ff42df6825d01
Closes-Bug: #1722752
(cherry picked from commit ecc6ce340aea59faaee4c2a49cd6d6fb90d8ed35)
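Turning the debug logs back on is then just a parameter in an environment file; a minimal sketch:

  parameter_defaults:
    # Passed through to the per-step puppet apply run by the ansible deploy
    # steps, which then emits the Debug: lines shown above.
    ConfigDebug: true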
|
|
Merge "..." into stable/pike
|
|
We should not pass any hardcoded value for monitor_interface and should
instead rely on monitor_address_block only.
Also removes journal_collocation, which is not consumed by
newer (and stable) builds of ceph-ansible.
Change-Id: Idf213a1f43a66506f76d07102f122839b5096948
Closes-Bug: #1715246
(cherry picked from commit 3e90ae3df5a7c5491672254733ceac163b34a395)
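In ceph-ansible terms the intended end state is roughly the following (variable names are ceph-ansible's own; the CIDR is only an example and how the templates feed it is simplified):

  # monitor_interface: intentionally no longer set
  monitor_address_block: 172.16.1.0/24   # the storage/mon network CIDR
  # journal_collocation: dropped entirely, newer stable ceph-ansible builds
  # no longer consume it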
|
|
stable/pike
|
|
This reverts commit 520be6bb4056ead8e6fad08ad96e99f7da5b341e.
This introduced a bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1501515
where, during upgrade, the previous heat resource for the
InternalApi network would have the incorrect name "Internal", and the
upgrade would try to delete that resource in order to create
"InternalApi". This needs to be reverted, and a proper fix that accounts
for this upgrade scenario will be submitted.
Related-Bug: #1718764
Change-Id: Id906fac421db317ce48d5cecfcd43397a0f4ab3d
|
|
Change-Id: I88f622c0b7a92ab75c2523fdc0d4d9ac1a2a2560
Closes-Bug: #1722908
(cherry picked from commit 06331a830e8923a9dc2ef8c15f2f1bf9d1d58ba1)
|
|
These got missed in the refactoring to support composable networks.
Change-Id: I5c97df08ae84e9c383175687428fb00143d171ff
Closes-Bug: #1720849
(cherry picked from commit ef1768e40c3a6c58a22381a4546772f571bee5cc)
|
|
Currently, when a network in network_data is disabled, no port
definitions for that network are created per role. This results in
no fallback to the ctlplane IP, because overriding a port type in
network-isolation with noop.yaml does nothing when the port does not
exist for the role.
This patch changes the IPs for a disabled network to be the same as the
ctlplane IPs, which fixes the issue and removes the need to use the
noop.yaml override for (non-VIP) ports.
Closes-Bug: 1721542
Change-Id: I301370fbf47a71291614dd60e4c64adc7b5ebb42
Signed-off-by: Tim Rozet <trozet@redhat.com>
(cherry picked from commit 9285cb5fc99331ca63ff09df59f26b6018bc781b)
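For illustration, a disabled network entry in network_data.yaml looks like this (field names from the standard network_data format, values are examples):

  - name: StorageMgmt
    name_lower: storage_mgmt
    enabled: false             # no per-role ports are generated for this network
    ip_subnet: '172.16.3.0/24'
  # With this change, port references for a disabled network resolve to the
  # ctlplane IP, so no noop.yaml override is needed for the (non-VIP) ports.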
|
stable/pike
|
|
It doesn't exist in the non-containerized OpenStack deployment, so leave
it stubbed out by default.
Closes-Bug: #1721212
Change-Id: I5fcb1f0b9958ac90f034a12f1ee733dae6571f9c
(cherry picked from commit a850d8059fbc1c36efb18773e40bb600e5da5005)
|
Adds an UpgradeRemoveUnusedPackages param to use
in the ansible 'when' conditional for the removal.
Adds the package removal to step2, right after a service
is stopped and disabled in step2. Package updates
happen in step3, so ideally we remove packages before that.
The package removal task has ignore_errors set to true,
so dependencies or other issues removing packages will
not fail the upgrade workflow.
Also adds this parameter to the upgrade environment files
for visibility, defaulting to false.
Change-Id: Ie4e4a2d41f7752c5a13507a7c15c6f68e203cfca
Related-Bug: 1701501
(cherry picked from commit ce0ef2fa207698c1ae61c1620fe3c5e8d1c7bfca)
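A sketch of the per-service upgrade task this adds (the package name is illustrative; the get_param form is resolved by heat before ansible runs):

  - name: Remove the unused package once the service is stopped and disabled
    tags: step2
    yum: name=<openstack-foo-package> state=removed
    ignore_errors: true
    when: {get_param: UpgradeRemoveUnusedPackages}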
|
|
Adds update_tasks for the minor update workflow. These will be
collected into playbooks during an initial 'update init' heat
stack update and then invoked later by the operator as ansible
playbooks.
Current understanding/workflow:
Step=1: stop the cluster on the updated node
Step=2: Pull the latest image and retag it as pcmklatest
Step=3: yum upgrade happens on the host
Step=4: Restart the cluster on the node
Step=5: Verification: test pacemaker services are running.
https://etherpad.openstack.org/p/tripleo-pike-updates-upgrades
Related-Bug: 1715557
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Co-Authored-By: Sofer Athlan-Guyot <sathlang@redhat.com>
Change-Id: I101e0f5d221045fbf94fb9dc11a2f30706843806
(cherry picked from commit a953bda0ae615dc44d3e8a70aa7ab0160e26f3af)
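A rough shape of the resulting update_tasks (step conditions and modules are simplified, and the image retag line is a placeholder):

  update_tasks:
    - name: Stop the pacemaker cluster on the node being updated     # Step=1
      when: step|int == 1
      pacemaker_cluster: state=offline
    - name: Retag the freshly pulled container image as pcmklatest   # Step=2
      when: step|int == 2
      shell: docker tag "<new_image>" "<service_image>:pcmklatest"   # placeholders
    # Step=3 (yum upgrade on the host) happens outside the service template
    - name: Restart the cluster on the node                          # Step=4
      when: step|int == 4
      pacemaker_cluster: state=online
    # Step=5: verify that the pacemaker-managed services are running again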
|
We make sure to run the upgrade and run os-net-config on its own. Running
os-net-config with the --no-activate option will:
- prevent the restart of the interfaces
- adjust the network files to the expected configuration, so that the next
run won't restart the network.
Eventually, at the next reboot, the change will be taken into account.
Currently there is no change that is required to be applied live during
the upgrade, so it is safe to ignore the new parameters.
Closes-Bug: #1721073
Change-Id: I51464274d5dff8a267992ae303ac3517b78d08fb
(cherry picked from commit 5aab25bb68f62b0d7e4ffdc20d4f4da1d82a76db)
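Sketched as a task (the real change lives in the upgrade scripts; the config path is the os-net-config default):

  - name: Rewrite the network config without restarting interfaces
    command: os-net-config --no-activate -c /etc/os-net-config/config.json -v
    # The files on disk now match the desired state, so the next regular
    # os-net-config run (or a reboot) will not bounce the network.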
|
|
Currently the default Sensu check defined in docker/services/sensu-client.yaml
reports only the first unhealthy container. This patch changes the check
output to contain a list of all unhealthy containers.
Change-Id: I0a934367ef22984d9091d160ec7105092edc8149
Closes-Bug: #1720972
(cherry picked from commit 9b016c9f3fbe9552497737974b9928d1dff4d299)
|
|
Currently the health check for the mysql container reports the container
as unhealthy because there is no 'mysql' user created. This patch creates
the user during mysql_bootstrap without any permissions, just to allow the
health check to connect to the DB and run 'select 1'.
Change-Id: Iab26da0d30939b219189d4e7beb2a61d456ab7c3
Closes-Bug: #1718944
(cherry picked from commit 3a9cfaa992e92423461d64f84d701336322bdd10)
|
|
The cold migration network is determined by the value of my_ip in nova.conf.
If this isn't set, the network with the default gateway will be used.
This patch sets my_ip and the whitelisted IP for cold migration over SSH to
the NovaApiNetwork.
Until https://bugs.launchpad.net/nova/+bug/1671288 is fixed we cannot control
the network used for live migration over SSH. It is determined by hostname
resolution.
This patch sets the whitelisted IP for live migration over SSH to the hostname
resolution network for the role - which is typically the same as NovaApiNetwork.
(NB The puppet manifest will remove duplicates).
Live migration over TLS is not affected; it can control the network used,
so it is configurable via NovaLibvirtNetwork.
Change-Id: Ica3f79d6d0cfae446e276172146f3a9407f2971f
Depends-On: Id22a6c990f424b9f3ca6159088540ea207460ffd
(cherry picked from commit 23331889a577b82b625610a80ecd44e164fe6cf1)
|
|
The services that the docker templates depend on have logging_source and
logging_groups, but those are not set on the docker outputs, so they are
not used when the containerized services are deployed.
Added logging_source & logging_groups as optional docker parameters in
tools/yaml-validate.py.
Closes-Bug: #1718110
Change-Id: I8795eaf4bd06051e9b94aa50450dee0d8761e526
(cherry picked from commit 5dbe1121e98a794ec6a6387ff56ee34314177567)
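The docker service templates can then simply pass these through from the base puppet service; a sketch of the output shape (the NovaApiBase resource name is illustrative):

  outputs:
    role_data:
      description: Role data for the containerized service
      value:
        service_name: {get_attr: [NovaApiBase, role_data, service_name]}
        # Newly passed through so log collection keeps working with containers
        logging_source: {get_attr: [NovaApiBase, role_data, logging_source]}
        logging_groups: {get_attr: [NovaApiBase, role_data, logging_groups]}
        # ... config_settings, step_config, docker_config, etc. ...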
|
|
Change-Id: Ia350e4899aa499cf27efffd9d2243e7e95fa1d65
Depends-On: I60796063fa9ebe0d98030fb982d22dabe2593ea0
Depends-On: I585b6877074353b5de62e5efaabfbe62432c473d
(cherry picked from commit f37fe4f903f429b43d22b485c29547f576ec7269)
|
|
The containerized galera service generates a galera.cnf which uses the
short hostname to identify itself rather than the FQDN from the mysql
network (e.g. overcloud-x.internalapi.cloudname).
This breaks when internal TLS is in use, because the mysql certificate
does not reference this short hostname.
Fix the appropriate hiera parameter to make it behave like the
non-containerized galera service.
Change-Id: I904cde38f2baeddab5178e8ad48d34a0c73629af
Closes-Bug: #1719599
(cherry picked from commit e10aa591dc9155a2746df01279c4ba4f2133fd17)
|
|
stable/pike
|