| author | Alexandru Avadanii <Alexandru.Avadanii@enea.com> | 2018-11-08 19:06:46 +0100 |
|---|---|---|
| committer | Alexandru Avadanii <Alexandru.Avadanii@enea.com> | 2019-01-10 15:22:59 +0100 |
| commit | 6eaeafea81f30ee66f2c187596815a0166f1132b (patch) | |
| tree | ab296858aac68b09b2a905486aa072dc8c304e0c /mcp/reclass/classes/cluster/mcp-fdio-noha/openstack/control.yml | |
| parent | 920d7ef405e9fed9af7b0576b94f6d19fe265c81 (diff) | |
[noha] Bring in FDIO (VPP+DPDK) scenario
- cmp, gtw: bump RAM allocation to accommodate hugepages/VPP;
  for now we overcommit; gtw01 resources can probably be lowered
  later (hugepage pillar sketch below);
- submodule: add salt-formula-neutron so we can locally patch it;
- repo (pillar sketch below):
* FD.IO repos for VPP packages;
* networking-vpp PPA for python-networking-vpp Neutron driver;
- use vpp-router for L3, disable neutron-l3-agent;
- baremetal_init: apply repo config before network (otherwise UCA
repo is missing when trying to install DPDK on baremetal nodes);
- arm64: iommu.passthrough=1 is required on ThunderX for VPP on
  newer kernels (manual GRUB equivalent sketched below);
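The RAM bump above is driven by hugepage reservations. As a minimal sketch, assuming the salt-formula-linux pillar layout, a reservation could be declared like this; the size/count values are made up, not this scenario's actual numbers:

```yaml
# Hypothetical hugepage pillar (salt-formula-linux); values illustrative.
linux:
  system:
    kernel:
      hugepages:
        large:
          default: true  # make 1G the default hugepage size at boot
          size: 1G
          count: 8       # must fit inside the bumped cmp/gtw RAM allocation
```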
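The repo entries above would take roughly this shape in pillar (salt-formula-linux `linux:system:repo`); both source strings below are placeholders, not the exact URLs/PPA wired in by this change:

```yaml
# Hypothetical repo pillar; source strings are placeholders.
linux:
  system:
    repo:
      fdio-release:
        # FD.IO repository providing VPP packages
        source: "deb https://packagecloud.io/fdio/release/ubuntu xenial main"
      networking-vpp:
        # PPA providing the python-networking-vpp Neutron driver;
        # <owner> is a placeholder, the real PPA owner is not shown here
        source: "deb http://ppa.launchpad.net/<owner>/networking-vpp/ubuntu xenial main"
```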
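For the arm64 note above, this is the manual equivalent on a GRUB-booted Ubuntu host; the deployment applies the parameter through its own provisioning, not by hand:

```sh
# Manual equivalent only: append iommu.passthrough=1 to the kernel
# command line and regenerate grub.cfg.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&iommu.passthrough=1 /' /etc/default/grub
update-grub  # takes effect on the next boot
```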
Design quirks:
- the vpp service runs as the 'neutron' user, which does not exist at
  the time VPP is installed and first started, hence the need to
  restart it before starting the vpp-agent service (ordering sketch
  below);
- gtw01 has DPDK, yet to configure it via IDF we reuse the
  compute-specific, OVS-targeted parameters like
  `compute_ovs_dpdk_socket_mem`, which is a bit misleading (IDF
  fragment below);
- vpp-agent requires ml2_conf.ini on ALL compute AND network nodes to
  map each node's physnets to real interface names (example below);
- the vpp process is bound to core '1', not parameterized via IDF
  (startup.conf excerpt below);
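The first quirk boils down to an ordering constraint, sketched here as plain shell; the real sequencing lives in the Salt states, and the package names are assumptions:

```sh
# Sketch of the ordering constraint only, not the literal deployment code.
apt-get install -y vpp             # vpp is installed and auto-started, but the
                                   # 'neutron' user it is configured to run as
                                   # does not exist yet
apt-get install -y neutron-common  # creates the 'neutron' system user
systemctl restart vpp              # now vpp can actually run as 'neutron'
systemctl start vpp-agent          # networking-vpp agent attaches to vpp
```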
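The second quirk refers to IDF knobs of roughly this shape (keys follow the Pharos IDF layout consumed by Fuel; all values are illustrative):

```yaml
# Illustrative IDF fragment: OVS/DPDK-named knobs reused for the VPP gateway.
idf:
  fuel:
    reclass:
      node:
        - compute_params:
            dpdk:
              compute_ovs_dpdk_socket_mem: "2048"  # hugepage MB per NUMA socket
              compute_ovs_pmd_cpu_mask: "0x6"      # illustrative mask
```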
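The third quirk implies a per-node mapping in ml2_conf.ini along these lines; the section/option names follow networking-vpp's convention as best understood, and the interface names are examples, not real node data:

```ini
# Illustrative per-node physnet mapping consumed by vpp-agent.
[ml2_vpp]
physnets = physnet1:tap-ext0,physnet2:TenGigabitEthernet5/0/0
```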
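The fourth quirk corresponds to VPP's own startup configuration, which uses standard startup.conf syntax to pin the main thread; core '1' is hard-coded rather than templated from IDF:

```
# /etc/vpp/startup.conf (excerpt)
cpu {
  # pin the VPP main thread to core 1 (hard-coded, not IDF-driven)
  main-core 1
}
```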
Change-Id: I659f7dbebcab7b154e7b1fb829cd7159b4372ec8
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
(cherry picked from commit 455b46a6be4bca145c047ed6957727c119285796)
Diffstat (limited to 'mcp/reclass/classes/cluster/mcp-fdio-noha/openstack/control.yml')
| -rw-r--r-- | mcp/reclass/classes/cluster/mcp-fdio-noha/openstack/control.yml | 61 |

1 file changed, 60 insertions, 1 deletion
```diff
diff --git a/mcp/reclass/classes/cluster/mcp-fdio-noha/openstack/control.yml b/mcp/reclass/classes/cluster/mcp-fdio-noha/openstack/control.yml
index 3c2cdefaa..0faf1b86a 100644
--- a/mcp/reclass/classes/cluster/mcp-fdio-noha/openstack/control.yml
+++ b/mcp/reclass/classes/cluster/mcp-fdio-noha/openstack/control.yml
@@ -7,6 +7,65 @@
 ##############################################################################
 ---
 classes:
-  - system.neutron.control.openvswitch.single
   - cluster.mcp-common-noha.openstack_control
   - cluster.mcp-fdio-noha
+  - system.neutron.control.single
+  - service.etcd.server.single
+  - system.galera.server.database.neutron
+# NOTE: All this configuration should later be moved to reclass.system as
+#       neutron.control.vpp.single
+parameters:
+  _param:
+    # yamllint disable rule:truthy
+    neutron_control_dvr: True
+    neutron_l3_ha: False
+    neutron_enable_qos: False
+    neutron_enable_vlan_aware_vms: False
+    neutron_enable_bgp_vpn: False
+    # yamllint enable rule:truthy
+    neutron_global_physnet_mtu: 1500
+    neutron_external_mtu: 1500
+    neutron_bgp_vpn_driver: bagpipe
+    internal_protocol: 'http'
+    neutron_firewall_driver: 'iptables_hybrid'
+    openstack_node_role: primary
+  neutron:
+    server:
+      role: ${_param:openstack_node_role}
+      global_physnet_mtu: ${_param:neutron_global_physnet_mtu}
+      l3_ha: ${_param:neutron_l3_ha}
+      dvr: ${_param:neutron_control_dvr}
+      qos: ${_param:neutron_enable_qos}
+      vlan_aware_vms: ${_param:neutron_enable_vlan_aware_vms}
+      firewall_driver: ${_param:neutron_firewall_driver}
+      bgp_vpn:
+        enabled: ${_param:neutron_enable_bgp_vpn}
+        driver: ${_param:neutron_bgp_vpn_driver}
+      backend:
+        engine: ml2
+        router: 'vpp-router'
+        tenant_network_types: "${_param:neutron_tenant_network_types}"
+        external_mtu: ${_param:neutron_external_mtu}
+        mechanism:
+          vpp:
+            driver: vpp
+            etcd_port: ${_param:node_port}
+            etcd_host: ${_param:node_address}
+            l3_hosts: ${_param:openstack_gateway_node01_hostname}
+            physnets:
+              physnet1:
+                vpp_interface: ${_param:external_vpp_tap}
+              physnet2:
+                # NOTE: Not a meaningful interface name, just avoid a filter-out
+                vpp_interface: 'dummy'
+                vlan_range: '${_param:opnfv_net_tenant_vlan}'
+      compute:
+        region: ${_param:openstack_region}
+      database:
+        host: ${_param:openstack_database_address}
+      identity:
+        region: ${_param:openstack_region}
+        protocol: ${_param:internal_protocol}
+      message_queue:
+        members:
+          - host: ${_param:single_address}
```