Updates hieradata for changes in https://review.openstack.org/471950.
Creates a new service - NovaMigrationTarget. On baremetal this just configures
live/cold migration. On Docker it also includes a container running a second
sshd service on an alternative port.
Configures /var/lib/nova/.ssh/config and mounts it into the nova-compute and
libvirtd containers.
Change-Id: Ic4b810ff71085b73ccd08c66a3739f94e6c0c427
Implements: blueprint tripleo-cold-migration
Depends-On: I6c04cebd1cf066c79c5b4335011733d32ac208dc
Depends-On: I063a84a8e6da64ae3b09125cfa42e48df69adc12
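For illustration, a composable service like this is normally enabled through a
resource_registry mapping in a Heat environment file; a minimal sketch, with
the template path shown here as an assumption rather than taken from this change:

    resource_registry:
      # Map the new composable service to its template so it gets deployed
      # (the exact template path is illustrative).
      OS::TripleO::Services::NovaMigrationTarget: ../puppet/services/nova-migration-target.yaml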
Change-Id: I3ea7c0c7ea049043668e68c6e637fd2aaf992622
Partial-Bug: 1700664
This currently assumes nova-compute and iscsid run in the same context, which
isn't true for a containerized deployment.
Change-Id: I11232fc412adcc18087928c281ba82546388376e
Depends-On: I91f1ce7625c351745dbadd84b565d55598ea5b59
Depends-On: I0cbb1081ad00b2202c9d913e0e1759c2b95612a5
So we don't waste RabbitMQ resources, since nothing will actually consume
the messages sent on the queue.
Note: we don't change scenario001, since it's a Telemetry scenario and
its services require notifications to be enabled.
Change-Id: I7d1d80da4eda7c0385461fe62b1d3038022973c6
Sometimes the infracloud gateway fails to respond to pings even though
everything else is working fine. Since we have coverage of this
functionality in the OVB jobs, it should be safe to turn it off
here so it stops spuriously failing our jobs.
We can't just set the resource to OS::Heat::None because there
are other resources with dependencies on it. Instead, this adds
a noop version of the validation software config that always
returns true.
Change-Id: I8361bc8be442b45c3ef6bdccdc53598fcb1d9540
Partial-Bug: 1680167
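A minimal sketch of what such a no-op validation looks like, assuming a Heat
SoftwareConfig wrapper (resource and file names are illustrative):

    resources:
      NoopValidationConfig:
        # Stand-in for the real validation script: always succeeds.
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/bash
            exit 0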
The service was previously named Congress; let's stay consistent and stop using
CongressApi in the Docker service before we release.
Change-Id: Id939b3d70e185da4279f3860812fa5dce27d64dd
Change-Id: I4308032891f0f9f5e93159f4a7ca29dada5850be
Let's be clear that the contents of this directory are for CI use
only and should not be used in production.
Change-Id: I3b448b9922c207b29cbdae36ee876368bda23dac
Add Ceph pool size configuration for CI, where CephPoolDefaultSize is 1.
Change-Id: I626d1398e31c3fcb9f100a8b185d71ba5909034a
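A minimal sketch of the kind of CI override this refers to (the parameter name
is the one mentioned later in this log):

    parameter_defaults:
      # Single-replica pools are only acceptable for CI, never production.
      CephPoolDefaultSize: 1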
Change-Id: I025ed07ce97132bce3fa7a15d170fc62e17e07a4
Change-Id: I9f9a9dcf1666b5b0475bc8fae5b785747480b7d6
Add a scenario to test containerized baremetal provisioning for tenants.
Change-Id: I760a42c4f71bc2659c609e0014da50a7ab959c1a
Change-Id: I152f5c97d2545aa595e193218653a4b7e56c0cb6
Horizon is a pretty core service for the overcloud, so we should
test it in the gate jobs. The TripleoFirewall service is also
included so the Horizon ports get opened correctly.
Change-Id: I844b6eee547f9b4aa8e0935ab2e1e458f7a9e960
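For illustration, the gate environments list the deployed services per role, so
the addition would look roughly like this (the list is truncated; other service
names are omitted):

    parameter_defaults:
      ControllerServices:
        # ... existing services ...
        - OS::TripleO::Services::Horizon
        - OS::TripleO::Services::TripleoFirewall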
This has been renamed to multinode-containers.yaml to reflect that the
scenario isn't upgrade-specific.
Change-Id: I151792700475643a4088d98eb5e1bd7248e260cd
Depends-On: Ib04e2ccb330d73df464ad97a20908f20426a4249
When using the Deployed Server feature, we rely on Puppet to install
packages. But nova-compute/libvirt puppet is running in a container, so
it cannot install anything on the host. We rely on virtlogd on the host,
so we need to install it there somehow. This patch uses host_prep_tasks
for that, conditionally based on the EnablePackageInstall stack
parameter value.
Also, the multinode-container-upgrade.yaml env is copied as
multinode-containers.yaml to remove the naming confusion, as the
environment file can be used for more than just upgrades. The old env
file will be removed once we make the upgrade job use the new one (a
catch-22 type of issue).
Change-Id: Ia9b3071daa15bc30792110e5f34cd859cc205fb8
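A minimal sketch of the host_prep_tasks approach described above, assuming an
Ansible package task gated on the stack parameter (the package name and the
exact conditional wiring are assumptions):

    host_prep_tasks:
      - name: Ensure virtlogd is available on the host
        package:
          name: libvirt        # illustrative package providing virtlogd
          state: present
        # EnablePackageInstall is resolved by Heat into a boolean for Ansible.
        when: {get_param: EnablePackageInstall}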
Add container-specific variants of scenarios. These variants are
supposed to be temporary, as their only purpose is to allow us to CI the
scenarios with containers while we don't have pacemaker containerized
yet. Once we can deploy and upgrade containerized deployments with
pacemaker, these should be deleted and the normal scenarios should be used.
An alternative approach would be to edit the scenarios on the fly within
the CI job to remove the pacemaker parts, which would be more DRY, but
perhaps more surprising when trying to debug issues.
Change-Id: I36ef3f4edf83ed06a75bc82940152e60f9a0941f
Since we disable mongodb by default, zaqar needs it in the scenario002 job.
Let's explicitly include it so it doesn't fail with:
Error while evaluating a Function Call, Could not find data item mongodb_node_ips
in any Hiera data file and no default supplied at
/etc/puppet/modules/tripleo/manifests/profile/base/zaqar.pp
Change-Id: I8f66def467d0c0175ad76f2ba5256b6a431934a8
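For illustration, pulling mongodb back into the scenario is a resource_registry
mapping along these lines (the relative path is an assumption):

    resource_registry:
      OS::TripleO::Services::MongoDb: ../../puppet/services/database/mongodb.yaml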
right thing by default"
by default
The default value is 0, which causes the minimum number to be calculated based
on the replica count from osd_pool_default_size. The default replica count is 3
and the calculated min_size is 2. If the replica count is 1 then the min_size
is 1, i.e.: min_size = replica - (replica/2).
Add the CephPoolDefaultSize parameter to ceph-mon.yaml. This parameter defaults
to 3 but can be overridden. See puppet-ceph-devel.yaml for an example.
Change-Id: Ie9bdd9b16bcb9f11107ece614b010e87d3ae98a9
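A minimal sketch of how such a parameter declaration could look in
ceph-mon.yaml (the description text is an assumption):

    parameters:
      CephPoolDefaultSize:
        description: Default replica count for Ceph pools.
        type: number
        default: 3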
This submission installs the Neutron L2 Gateway service
in scenario004.
This is only to check that the service is installed correctly;
no sanity check is running yet.
Change-Id: I421802e9aa1a9f192860a6d72b4bb7c729666c3a
Master is now the development branch for Pike, so change
the release alias name.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
The pingtest template creates both a default share type and
a share which should use this type. Explicitly referencing
the share type should ensure that the share is always created
once the share type exists.
Change-Id: I756e6a8e477de8d0e46302dda26265ae482dd2e5
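A sketch of the idea, assuming Heat's Manila resources (property values are
illustrative): referencing the type with get_resource adds an explicit
dependency, so the share is only created after the type.

    resources:
      test_share_type:
        type: OS::Manila::ShareType
        properties:
          name: default
          driver_handles_share_servers: false
      test_share:
        type: OS::Manila::Share
        properties:
          share_protocol: NFS
          size: 1
          # Referencing the resource (rather than a bare name) adds the dependency.
          share_type: {get_resource: test_share_type}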
Closes-Bug: #1691853
All paths should be relative as we should not rely on the package
location - this can easily be overridden via --templates, and this
is exactly what we do for the upgrades job, where this will break
because we'll include the wrong (newer) version of these services
when deploying the older pre-upgrade overcloud.
Change-Id: Id8aea09305c0857253c44477945e34377cca64ca
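To illustrate the difference (the service name and paths are made up for the
example):

    resource_registry:
      # Relative: resolved against this file, so it follows --templates overrides.
      OS::TripleO::Services::Example: ../../puppet/services/example.yaml
      # Absolute: would always pick the copy installed by the package, which is
      # exactly what breaks the mixed-version upgrade job.
      # OS::TripleO::Services::Example: /usr/share/openstack-tripleo-heat-templates/puppet/services/example.yaml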
We don't need the expirer unless we have the collector and standard
storage enabled. Let's turn it off by default and make it
an optional service. In the upgrade scenario, we will kill the
process and stop the expirer, unless it is explicitly enabled.
Change-Id: Icffb7d1bb2cf7bd61026be7d2dcfbd70cd3bcbda
We need Docker service mapping defined and set to OS::Heat::None so that
we can reuse multinode-container-upgrade.yaml service list both for
initial deployment and for the upgrade. The upgrade will not be broken
by this as its env files are being passed later on the command line, and
they'll take priority and effectively enable the Docker service on
upgrade.
Another change we need for mixed upgrade is to add the TripleoPackages
service, which will take care of updating RPMs on the bare metal and
prevent docker installation from failing with outdated
puppet-tripleo ("Could not find class ::tripleo::profile::base::docker").
Related-Bug: #1685795
Closes-Bug: #1689772
Change-Id: Idb6917f22d0e9f74f8853972c6a08bffb01be410
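A minimal sketch of the mapping and service-list additions described above (the
service list is truncated):

    resource_registry:
      # Defined but disabled here; environments passed later on the command line
      # (e.g. docker.yaml) override this and enable Docker for the upgrade.
      OS::TripleO::Services::Docker: OS::Heat::None
    parameter_defaults:
      ControllerServices:
        # ... existing services ...
        - OS::TripleO::Services::TripleoPackages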
The scenario001 env in Ocata has PankoApi mapped locally, and it
has been removed from the master scenario001 env file. In the tripleo.sh
upgrade command, both the old (Ocata) and new (master) env files
are included, because of which the new service file is not used,
as it has been removed. This change adds the PankoApi
mapping back to the scenario001 env file for now. The actual fix
will be to remove the old env file from the upgrade command of tripleo.sh.
Partial-Bug: #1685759
Change-Id: I4a8ee38d990a1980eea6ec63f2780357d040ded4
The Ceilometer collector is deprecated in the Pike release.
Do not deploy it by default. Instead, use the pipeline
yaml to configure the publisher directly.
Closes-bug: #1676961
Change-Id: Ic71360c6307086d5393cd37d38ab921de186a2e0
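For illustration, a direct-publishing pipeline sink would look roughly like
this (the publisher URL is an assumption):

    sinks:
      - name: meter_sink
        publishers:
          # Publish straight to the metric store instead of via the collector.
          - gnocchi://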
The [Pre|Post]Puppet resources were renamed in
https://review.openstack.org/#/c/365763.
This was intended to provide pre/post deployment
steps with an agnostic name instead of
being tied to a specific technology.
The renaming was unintentionally reverted in
https://review.openstack.org/#/c/393644/ and
https://review.openstack.org/#/c/434451.
This submission merges both resources into one
and removes the old pre|post hooks.
Closes-bug: #1669756
Change-Id: Ic9d97f172efd2db74255363679b60f1d2dc4e064
This change implements a MOTD message and provides a hash of
sshd config options which is passed to the puppet-ssh module.
The SSHD puppet service is enabled by default, as it is
required for Idb56acd1e1ecb5a5fd4d942969be428cc9cbe293.
The service is also added to the CI roles.
Change-Id: Ie2e01d93082509b8ede37297067eab03bb1ab06e
Depends-On: I1d09530d69e42c0c36311789166554a889e46556
Closes-Bug: #1668543
Co-Authored-By: Oliver Walsh <owalsh@redhat.com>
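A minimal sketch of the kind of settings this enables, with the parameter names
shown here being assumptions rather than the exact ones introduced by this
change:

    parameter_defaults:
      BannerText: |
        Authorized users only.
      SshServerOptions:
        PasswordAuthentication: 'no'
        X11Forwarding: 'no'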
We disabled it because it stopped working. Let's see how it works now.
Change-Id: If1efb86cb1d6ada357d4562408a566ac702fb6be
Closes-Bug: #1646506
The non-working containers upgrade CI is caused by the fact that all
multinode jobs deploy pacemaker environments.
Currently we cannot upgrade Pacemaker-managed deployments
anyway (containerization of pacemakerized services is WIP); upgrades
have only been tested with non-Pacemaker deployments so far.
We need a new environment which will not try to deploy in a
pacemakerized way. When pacemaker-managed services are containerized, we
can change the job to upgrade an HA deployment (or single-node "HA" at
least), and perhaps even get rid of the environment file introduced
here and reuse multinode.yaml.
Change-Id: Ie635b1b3a0b91ed5305f38d3c76f6a961efc1d30
Closes-Bug: #1682051
We need the service to be present to run jobs involving containers. Note
that this is effectively a no-op for the current CI jobs, as by default
the Docker service is mapped to OS::Heat::None. Docker will actually be
deployed only if environments/docker.yaml is included in the deploy
command.
Change-Id: I97a35e30e428ff64feeb411bf63dbb7aa54f9829
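For illustration, environments/docker.yaml then re-maps the service to a real
template, roughly as follows (the path is an assumption):

    resource_registry:
      OS::TripleO::Services::Docker: ../puppet/services/docker.yaml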
This submission will enable the BGPVPN API
on scenario004.
This addition to scenario004 does not
provide any sanity check for the Neutron API
extension. At this stage it is meant to
install the required packages and prerequisites,
configure the extension, and
have the services started correctly.
In the README.rst file this is displayed as
neutron-bgpvpn, so further integrations
should be added as neutron-<extension_name>
for easier reading.
Depends-On: I4d0617b0d7801426ea6827e70f5f31f10bbcc038
Depends-On: I2be0fab671ec1a804d029afc6dc27d19a193b064
Change-Id: I6c257417a9231c44e13535bc408d67d2a3cacbf8