The Nova Placement API's configuration currently relies
on the nova-api profile for its keystone authtoken
configuration. This means that Nova Placement would
fail if it were installed on an isolated node or in a
docker container (this currently breaks TripleO's
deployment of placement via docker).
This patch creates a new authtoken profile and
includes it from both the api and placement roles.
Change-Id: I7b38ab6ba5cae41689ac500d97dec4d09c73d387
Co-Authored-By: Alex Schultz <aschultz@redhat.com>
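As a hedged sketch (the class name below follows the existing
tripleo::profile::base::nova::* naming but is illustrative, not the
exact code of this patch), the standalone profile can look roughly
like this:

    # Illustrative only: a self-contained authtoken profile that both
    # the api and placement profiles can include, so neither needs the
    # other to be present on the same node or container.
    class tripleo::profile::base::nova::authtoken (
      $step = hiera('step'),
    ) {
      if $step >= 3 {
        include ::nova::keystone::authtoken
      }
    }

    # In the api and placement profiles:
    #   include ::tripleo::profile::base::nova::authtoken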
|
|
- turn nova_api_wsgi_enabled into a parameter
- update rspec tests
- fix TLS to run at step 1
Change-Id: I4d3f9c92f0717ae8c3bc8d71065fab281de82008
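A minimal sketch of the parameter change, assuming a hiera-backed
default (the hiera key and the step number here are assumptions, not
taken from the merged change):

    class tripleo::profile::base::nova::api (
      $nova_api_wsgi_enabled = hiera('nova_wsgi_enabled', false),
      $step                  = hiera('step'),
    ) {
      if $step >= 4 {
        include ::nova::api
        if $nova_api_wsgi_enabled {
          # Only manage the Apache vhost when the API runs under WSGI.
          include ::nova::wsgi::apache_api
        }
      }
    }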
|
|
We need to run nova-cell_v2-discover_hosts at the very end of the
deployment because the nova database needs to be aware of all
registered compute hosts.
1. Move keystone resources management to step 3.
2. Move the nova-compute service to step 4.
3. Move nova-placement-api to step 3.
4. Run nova-cell_v2-discover_hosts at step 5 on one nova-api node
   (see the sketch below).
5. Run neutron-ovs-agent at step 5 to avoid racy deployments where
   it starts before neutron-server when doing HA deployments.
With that change, we expect Nova to be aware of all compute services
deployed in TripleO during an initial deployment.
Depends-On: If943157b2b4afeb640919e77ef0214518e13ee15
Change-Id: I6f2df2a83a248fb5dc21c2bd56029eb45b66ceae
Related-Bug: #1663273
Related-Bug: #1663458
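The sketch referenced above, hedged: the exec wraps the real
nova-manage command, but the resource itself and the bootstrap-node
guard are illustrative rather than copied from the patch:

    if $step >= 5 and $is_bootstrap_node {
      # By step 5 every nova-compute has registered itself, so cell_v2
      # host discovery maps all compute hosts into the cell database.
      exec { 'nova-cell_v2-discover_hosts':
        path    => ['/bin', '/usr/bin'],
        command => 'nova-manage cell_v2 discover_hosts',
        user    => 'nova',
      }
    }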
|
|
Cleanup patch once the THT patch is merged.
Change-Id: Iba439a4758a4728197d7620b764a4f0f2648ee0f
Depends-On: I09b73476762593642a0e011f83f0233de68f2c33
|
|
The Nova team suggested not deploying Nova API in WSGI with Apache
in production.
It's causing some issues that we didn't catch until now (see the bug
report). Until we figure out what is wrong, let's disable it so we
can move forward with the upgrade process.
Related-Bug: 1661360
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: Ia87b5bdea79e500ed41c30beb9aa9d6be302e3ac
|
|
It's not required in Ocata; let's configure the basic setup for cells.
Note: this also cleans up old code that is no longer valid.
Change-Id: Iac5b2fbe1b03ec7ad4cb8cab2c7694547be6957d
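For illustration only, a basic cells v2 setup can be reduced to two
nova-manage calls; the exec resources below are assumptions rather
than the code of this patch (puppet-nova also provides higher-level
helpers for this):

    # Illustrative: map cell0 and create the default cell once the API
    # database exists; real code should add 'unless' guards to keep
    # the execs idempotent.
    exec { 'nova-cell_v2-map_cell0':
      path    => ['/bin', '/usr/bin'],
      command => 'nova-manage cell_v2 map_cell0',
      user    => 'nova',
    }
    -> exec { 'nova-cell_v2-create-default-cell':
      path    => ['/bin', '/usr/bin'],
      command => 'nova-manage cell_v2 create_cell --name=default',
      user    => 'nova',
    }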
|
|
This feature is broken for us now and there is work in progress in Nova
to improve nova cell deployment.
Until it's fixed upstream, we need to disable cells deployment for now,
so we can promote our CI.
Change-Id: I379ba9e94a92ed225a03a67fc975b542447a9c8b
Related-Bug: #1649341
|
|
Having the db_sync code live in the mysql profile causes
coupling that doesn't work unless your MySQL server has the
latest Nova packages installed. This may not work for some
baremetal setups (where an isolated database exists) or
with containers, where the MySQL container definitely doesn't
have nova packages installed.
Moving this code into the nova-api role also matches where we
were already running the db sync for the normal API database,
so it should be safe.
Change-Id: Ib625e2ac9c8d6bd1d335c58e291facc4ea5839ae
Co-Authored-By: Alex Schultz <aschultz@redhat.com>
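As a hedged sketch, the sync can be driven from the nova-api profile
through the sync_db/sync_db_api parameters that puppet-nova's
::nova::api class exposes; the step number and the $sync_db wiring
below are assumptions:

    if $step >= 4 {
      class { '::nova::api':
        sync_db     => $sync_db,  # true only on the bootstrap node
        sync_db_api => $sync_db,
      }
    }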
|
|
nova::wsgi::apache was deprecated in Ocata in favor of
nova::wsgi::apache_api.
Let's switch to it.
Change-Id: I59b3b36be33268fa6e261a7db3c4aa8e8e712ffb
Depends-On: I5fc99062d349597393e2248c66f2d863029c7730
|
|
This optionally enables TLS for Nova API in the internal network.
If internal TLS is enabled, each node that is serving the Nova API
service will use certmonger to request its certificate.
Note that this doesn't enable internal TLS for the nova metadata
service since it doesn't run over httpd. This will be handled in
a later commit.
bp tls-via-certmonger
Change-Id: I88380a1ed8fd597a1a80488cbc6ce357f133bd70
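Roughly, the profile can pass the certmonger-provisioned certificate
and key for the node's internal API network into the Apache vhost.
The $certificates_specs and $nova_api_network names below are
illustrative assumptions:

    if $enable_internal_tls {
      # Certificate and key requested via certmonger for this node
      # and network.
      $tls_certfile = $certificates_specs["httpd-${nova_api_network}"]['service_certificate']
      $tls_keyfile  = $certificates_specs["httpd-${nova_api_network}"]['service_key']
    } else {
      $tls_certfile = undef
      $tls_keyfile  = undef
    }

    class { '::nova::wsgi::apache_api':
      ssl_cert => $tls_certfile,
      ssl_key  => $tls_keyfile,
    }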
|
|
This patch updates the Nova profile so that we set memcached
servers correctly for the Nova keystone auth_token middleware.
Most of the hiera settings for ::nova::keystone::authtoken are
already included in the t-h-t nova-api service.
Change-Id: I3b7ff02abbd0d5e0c38232d02b33e4c7bc411120
Closes-bug: #1633595
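A minimal sketch of the idea, assuming the memcached_node_ips hiera
key used elsewhere in TripleO and stdlib's suffix() function:

    # Illustrative: point the auth_token middleware at the
    # deployment's memcached servers instead of leaving it unset.
    class { '::nova::keystone::authtoken':
      memcached_servers => suffix(hiera('memcached_node_ips'), ':11211'),
    }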
|
|
The patch making nova run over httpd had added migration logic to
stop nova-api. However, this doesn't work since nova-metadata runs
in the same process. The fact that it kept working at all seems to
be luck: the systemctl stop runs, and then we start the service
again via the nova::api resource, so this is fragile in its current
state.
This patch removes the exec, as we don't need it for the migration.
Change-Id: I4603b81d30a704b07eef461b3cdbfe164614b04f
|
|
We can now get this parameter from t-h-t, so it's not needed here.
Change-Id: I014e7b3a6feb5609ace2e8ef1e4df11448b0a0cc
Depends-On: Ic229182cc5c887b57f6182c3db1bac8bed330f7c
|
|
This adds the necessary resources to the manifest to migrate nova
to run over httpd. The service name will be moved to t-h-t in a
subsequent commit, but since this patch depends on t-h-t, we try to
avoid circular dependencies between the repos.
Change-Id: I91d430a3871672f90b0f885736f067ddae3c238c
Depends-On: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
|
|
This patch moves the various DB syncs into the MySQL role.
Database creation needs to occur on the MySQL server to
avoid permission issues.
This patch also moves database creation to step 2 so we can
guarantee that all per-service databases exist at this time.
This avoids the complex ordering otherwise needed during step 3,
where services on different hosts run their own db syncs
in a distributed fashion.
Change-Id: I05cc0afa9373429a3197c194c3e8f784ae96de5f
Partial-bug: #1620595
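For illustration, the MySQL-side part of this might reduce to
declaring puppet-nova's database classes behind a step guard
(credentials come from hiera; the exact step logic in the merged
patch may differ):

    if $step >= 2 {
      # Create the nova databases and grants on the MySQL server so
      # each service can run its own db sync from its own host later.
      include ::nova::db::mysql
      include ::nova::db::mysql_api
    }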
|
|
In the Next Generation HA architecture a number of active/active services
will be run via systemd. In order for this to work we need to make sure that
the sync_db operation only takes place on the bootstrap node, just like it is
done today for the pacemaker profiles.
We do this by removing sync_db as a parameter and instead setting it to
true or false depending on whether the hostname matches the bootstrap
node, as is done today in the pacemaker role.
Note that we call hiera('bootstrap_nodeid', undef) because if a profile
is included on a non-controller node, that variable will be undefined.
The following testing was done:
- HA puppet-pacemaker.yaml scenario with three computes
- NonHA with one controller
- NonHA with three controllers
Fixes-Bug: 1600149
Co-Author: cmsj@tenshu.net
Change-Id: I04a7b9e3c18627ea512000a34357acb7f27d6e0e
Implements: blueprint ha-lightweight-architecture
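In sketch form, the bootstrap check described above looks roughly
like the following (the exact comparison in the merged code may
differ):

    # Default the lookup to undef so it does not fail on nodes
    # (e.g. non-controllers) where the key is not set at all.
    $bootstrap_node = hiera('bootstrap_nodeid', undef)
    if $bootstrap_node and downcase($bootstrap_node) == $::hostname {
      $sync_db = true
    } else {
      $sync_db = false
    }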
|
|
The code was in THT before but is now in the Nova API profile.
Change-Id: I7035f7998c11dc5508dae8c1a750b93c2944b2d4
|
|
We perform the Galera setup in step 2, so there is no guarantee that the
database will be available in that same step [1].
We used to implement a dependency in puppet using the 'galera-ready'
resource (clustercheck), but this is not possible with roles because we
also have no guarantee that clustercheck is installed on the same node.
Because of the above, all services must create/sync their databases
in a later step. This patch fixes Nova API and Neutron Server; the
other services already use step 3.
1. https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/README.rst
Change-Id: I22750ffb64afbe40b5560a6a0d0dabc5b8927d32
|
|
Move nova::db classes from THT to puppet-tripleo in Nova API profile.
Implements: blueprint refactor-puppet-manifests
Change-Id: I4fc3cb822822adc1c58b2cfa2de8584a73fa6427
|
|
It was included in THT before, but it's now in the nova/api role.
It will also be added to the nova/compute role later.
Change-Id: I6b5857d3d4740c0bf3f748719f30a05f1c62cb59
|
|
Change-Id: I1dde63a5a7d1624494a7157a9679f88f4cb780e0
Implements: blueprint refactor-puppet-manifests
|