Adds a new nested stack deployment which allows operators to
opt in to deploying tarballs and RPM packages by setting
DeployArtifactURLs as a parameter_default in a Heat
environment.
The intent is to use this setting to allow t-h-t to
transparently deploy things like tarballs of puppet modules
via a Swift Temp URL.
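A minimal sketch of such an opt-in environment file (the Swift Temp URL
below is only a placeholder, not a real endpoint):
parameter_defaults:
  # one or more URLs to tarballs or RPMs to deploy on the nodes
  DeployArtifactURLs:
    - "http://192.0.2.1:8080/v1/AUTH_example/artifacts/puppet-modules.tar.gz"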
Change-Id: I1bad4a4a79cf297f5b6e439e0657269738b5f326
Implements: blueprint puppet-modules-deployment-via-swift
|
Due to an incorrect rebase, d0dcb9401c868786df58f5801a431392b8e89df8
dropped the changes made in dd7602ad82100617126be26d80a6d3f67cb739ac to
add a vncproxy to the endpoint map. This change restores them.
Change-Id: Ifef7f955481405d5fe39ba48c8b1a79aa9c170f2
|
This change adds a sample network-environment.yaml file to the
environments. This sample includes pointers to NIC config files,
as well as default network subnets and allocation pools.
This is meant to be a demonstration of the default settings for
a virtual deployment. In a real deployment, the operator would
customize the settings here and point to custom NIC config
templates.
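A trimmed-down sketch of what such a sample might contain (the NIC
config paths and subnet values below are illustrative, not the shipped
defaults):
resource_registry:
  # NIC config templates for each role
  OS::TripleO::Compute::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: ../network/config/single-nic-vlans/controller.yaml
parameter_defaults:
  # default subnet and allocation pool for the Internal API network
  InternalApiNetCidr: 172.16.2.0/24
  InternalApiAllocationPools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]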
Change-Id: I0288c0680effea06b5f805a0d955e8bbf6152ba6
|
Populates /etc/hosts with an entry for each IP address assigned to the
node, which will be useful for migrating service configuration from
using IPs to using hostnames.
This is what the lines look like on a host which doesn't have all ports:
172.16.2.6 overcloud-novacompute-0.localdomain overcloud-novacompute-0
192.0.2.9 overcloud-novacompute-0-external
172.16.2.6 overcloud-novacompute-0-internalapi
172.16.1.6 overcloud-novacompute-0-storage
192.0.2.9 overcloud-novacompute-0-storagemgmt
172.16.0.4 overcloud-novacompute-0-tenant
192.0.2.9 overcloud-novacompute-0-management
The network against which the default (or primary) name is resolved
can be configured (for computes) via ComputeHostnameResolveNetwork.
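For example, to resolve the primary name against the storage network
instead (a sketch; the network name is assumed to match the deployed
network configuration):
parameter_defaults:
  ComputeHostnameResolveNetwork: storage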
Change-Id: Id480207c68e5d68967d67e2091cd081c17ab5dd7
|
During upgrades, we run Puppet on the whole deployment to converge the
state only after the upgrade workflow itself has been fully
completed. That is an opportunity to utilize Puppet to make sure Nova
Compute RPC doesn't remain pinned to the older version.
Change-Id: I6ebc813a80dfd9dfbbb213c38724487e044507b8
|
A stack is an extremely heavyweight abstraction in Heat. Particularly in
TripleO, every stack includes a copy of all the template and environment
data for all of the stacks in the tree, all of which must be stored anew
in the database.
The EndpointMap abstraction created no fewer than 30 nested stacks, none
of which contained any resources but which existed purely for the
purpose of abstracting out some intrinsic functions used to calculate
the endpoint URLs for the various services. This likely adds several GB
to the memory requirements of the undercloud, and can cause things to
slow to a crawl since all 30 nested stacks need to be queried whenever
we need data from any one of them.
This change eliminates the nested stacks and instead generates the
endpoint map statically. This can be done offline in less than 250ms,
allows the input data to be expressed in an even more human-readable
form, and reduces the runtime overhead of the endpoint map by a factor
of 31, all with no loss of functionality, compatibility, or flexibility.
Since we don't run a setup script to generate the tarball, the
endpoint_map.yaml output is checked in to source control. The build
script offers a --check option that can be used to make sure that the
output file is up-to-date with the input data.
Change-Id: I2df8f5569d81c1bde417ff5b12b06b7f1e19c336
|
This parameter can be used for pinning (and later unpinning) the Nova
Compute RPC version.
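A sketch of how the pin might be applied from an environment file; the
parameter name UpgradeLevelNovaCompute and the value below are
assumptions, since the message does not spell them out:
parameter_defaults:
  # pin compute RPC to the previous release during the upgrade,
  # set back to '' afterwards to unpin
  UpgradeLevelNovaCompute: liberty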
Change-Id: I2f181f3b01f0b8059566d01db0152a12bbbd1c3e
|
Change-Id: I7226070aa87416e79f25625647f8e3076c9e2c9a
|
Add Heat software deployments to be used to upgrade major versions of
OpenStack on the controller nodes. All controller services are taken
down while the upgrade is in progress.
The new, updated yum repositories should be configured by another
process, e.g. the deployment artifacts transfer via Swift.
Change-Id: Ia0a04e4a11d67e7a5acc53c1f8a8f01ed5ca8675
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
|
Our current nova-neutron configuration does not work with
the latest puppet-nova; in particular, it is broken by this patch [1].
This commit adds keystone v3 endpoints to the map and gets the
nova::network::neutron configuration to use them.
[1] https://github.com/openstack/puppet-nova/commit/d09868a59c451932d67c66101b725182d7066a14
Change-Id: Ifb8c23c81c665c2732fa5cd757760668b06a449a
|
See RHBZ 1311005 and 1247303. In short: sometimes when a controller
node gets fenced, rabbitmq is unable to rejoin the cluster. To fix this
we need two steps:
1) The fix for the RA in BZ 1247303
2) Add notify=true to the meta parameters of the rabbitmq resource on
fresh installs and updates
Note that if this change is applied on systems that do not
have the fix for the rabbitmq resource agent, no action is taken.
So when the resource agent is updated, the notify
operation will start to work as soon as the first monitor
action takes place.
Fixes RH Bug #1311005
Change-Id: I513daf6d45e1a13d43d3c404cfd6e49d64e51d5a
|
This change adds extra config yaml files for the Big Switch agent
and Big Switch LLDP.
This change is mainly for compute nodes; the changes related
to controller nodes landed in e78e1c8d9b5a7ebf327987b22091bff3ed42d1c1.
This change also removes the neutron_enable_bigswitch_ml2 flag. Instead,
users need to specify NeutronMechanismDrivers: bsn_ml2 in an environment file.
Previous discussion about this change can be found in the abandoned
review request https://review.openstack.org/#/c/271940/
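A sketch of the environment file change this implies (keeping the
default openvswitch driver alongside bsn_ml2 is just an example):
parameter_defaults:
  NeutronMechanismDrivers: ['openvswitch', 'bsn_ml2']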
Depends-On: Iefcfe698691234490504b6747ced7bb9147118de
Change-Id: I81341a4b123dc4a8312a9a00f4b663c7cca63d7c
|
This commit ensures we are not using any deprecated parameters for
nova::network::neutron and are using the right variable names.
Change-Id: Ic1b41e2cdbb6b180496822cc363c433e9388aa02
|
By configuring the Cinder 'host' setting via the appropriate class
param instead of cinder_config, we don't risk overriding it if the
user passes additional config settings using cinder_config in
ExtraConfig.
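For reference, a sketch of the kind of ExtraConfig override this avoids
clobbering (the hieradata key and option below are illustrative
assumptions, not taken from the commit):
parameter_defaults:
  ExtraConfig:
    cinder::config::cinder_config:
      DEFAULT/nas_secure_file_permissions:
        value: 'false'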
Change-Id: Idf33d87e08355b5b4369ccb0001db8d4c3b4c20f
|
This change adds puppet hieradata settings which disable IPv6
autoconfiguration and accept_ra by default on all interfaces.
When IPv6 is used, the interfaces are individually enabled and
configured with static IP addresses.
The networking on the compute host needs to be completely
separate from the tenant networking, in order to safeguard the
compute host and isolate tenant traffic. This change disables
IPv6 autoconfiguration and acceptance of RAs by default on
interfaces unless specifically enabled.
Without these settings, IPv6 is enabled on all interfaces, along
with autoconfiguration and accept_ra, so when the compute host
creates a bridge interface for the router (qbr-<ID>), the
compute node will automatically assign an IPv6 address and
install a default IPv6 route on the bridge interface when it
receives RAs from the Neutron router.
The change to turn off autoconfiguration means that interfaces
will not self-assign an IPv6 address, and the change to not accept
RAs is a security hardening feature. This requires that a
static gateway address be declared in the network environment
in the parameter ExternalNetworkDefaultRoute. Alternatively, sysctl
can be modified to change the accept_ra behavior for specific
interfaces.
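A sketch of the corresponding network environment setting (the gateway
address is a documentation-prefix placeholder):
parameter_defaults:
  # static default gateway on the External network, since RAs are no
  # longer accepted by default
  ExternalNetworkDefaultRoute: 2001:db8:fd00:1000::1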
Change-Id: I8a8d311a14b41baf6e7e1b8ce26a63abc2eaabef
Closes-bug: 1544296
|
This change adds the TripleO Heat parameters and Puppet hieradata
to support setting the MTU for Neutron tenant networks. A new
parameter, NeutronTenantMtu, is introduced, and this gets used for
NeutronDnsmasqOptions and in the Puppet hieradata.
NeutronTenantMtu is also used in the Puppet hieradata for both the
compute and control nodes. Two values are set:
nova::compute::network_device_mtu,
which sets network_device_mtu = <NeutronTenantMtu> in /etc/nova/nova.conf
neutron::network_device_mtu,
which sets network_device_mtu = <NeutronTenantMtu> in /etc/neutron/neutron.conf
Finally, the NeutronDnsmasqOptions parameter becomes a string
substitution that maps NeutronTenantMtu onto the DHCP options,
so a default of 'dhcp-option-force=26,%MTU%' would be formatted to
'dhcp-option-force=26,1300' if NeutronTenantMtu were 1300.
This sets dnsmasq to serve an MTU via DHCP that matches
NeutronTenantMtu:
/etc/neutron/dnsmasq-neutron.conf:dhcp-option-force=26,1300
Typically, you would change all three of these settings to use small
or jumbo frames in VMs. When using tunneling, NeutronTenantMtu
should be set at least 50 bytes smaller than the physical network
MTU in order to make room for tunneling overhead.
Note that this change does not support setting the MTU on veth
interfaces if veth patches are used to br-int instead of OVS
patches.
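A sketch of the environment settings described above (1400 is just an
example value leaving headroom below a 1500-byte physical MTU):
parameter_defaults:
  NeutronTenantMtu: 1400
  # %MTU% is replaced with the value of NeutronTenantMtu
  NeutronDnsmasqOptions: 'dhcp-option-force=26,%MTU%'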
Change-Id: I38840e082ee01dc3b6fc78e1dd97f53fa4e63039
|
Currently the permissions for the CA file that is injected (if the
environment is set) don't permit users that don't belong to the group
that owns the file to read it. This is too restrictive and isn't
necessary, as the certificate should be public.
This is useful in the case where we want a service that can't read the
certificate chain (or bundle) to be able to read that CA certificate.
This is the case for the MariaDB version that is being used in CentOS
7.1 for example.
Change-Id: I6ff59326a5570670c031b448fb0ffd8dfbd8b025
|
We were incorrectly wiring the rbd user to the relevant glance
module parameter, making it impossible to customize the
rbd user when using an external Ceph.
Change-Id: Ibe4eaedf986a9077f869c6530381e69ee0281f5b