author     Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2018-11-08 19:06:46 +0100
committer  Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2019-01-09 15:39:05 +0100
commit     455b46a6be4bca145c047ed6957727c119285796 (patch)
tree       717062888465d74227a45d724fab21e9c8fd5957 /mcp/config
parent     ad2bdf2eb08c0991757f30d370c90d5c9d814d3e (diff)
Bring in FDIO (VPP+DPDK) scenario
- cmp, gtw: bump RAM allocation to accommodate hugepages/VPP;
for now we overcommit, gtw01 resources can probably be lowered;
- submodule: add salt-formula-neutron so we can locally patch it;
- repo (see the package-source sketch after this list):
* FD.IO repos for VPP packages;
* networking-vpp PPA for python-networking-vpp Neutron driver;
- use vpp-router for L3, disable neutron-l3-agent (sketch after
  this list);
- baremetal_init: apply repo config before network (otherwise UCA
repo is missing when trying to install DPDK on baremetal nodes);
- arm64: iommu.passthrough=1 is required on ThunderX for VPP on
  newer kernels (see the boot-parameter sketch after this list);
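
Roughly what the extra package sources amount to if added by hand; the repo URL, suite and PPA name below are placeholders, and the scenario actually wires these through the Salt repo pillar rather than ad-hoc commands:

```sh
# Placeholder repo URL / PPA name -- illustrative only; package names can also
# differ per VPP release. python-networking-vpp is the Neutron driver package
# pulled from the networking-vpp PPA mentioned above.
add-apt-repository -y ppa:<networking-vpp-ppa>
echo "deb <fd.io-apt-repo-url> <suite> main" > /etc/apt/sources.list.d/fdio.list
apt-get update
apt-get install -y vpp vpp-plugins python-networking-vpp
```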
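
The neutron-l3-agent part in isolation, as a sketch; the pillar target is an assumption, the real change lives in the scenario/formula configuration:

```sh
# Assumed pillar target -- shown only to illustrate handing L3 over to
# vpp-router and taking neutron-l3-agent out of the picture.
salt -I 'neutron:gateway' service.stop neutron-l3-agent
salt -I 'neutron:gateway' service.disable neutron-l3-agent
```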
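
And a boot-parameter sketch of what the hugepage/IOMMU requirements boil down to on the kernel command line; the page size/count and node targets are examples, the scenario itself sets this through pillar data rather than by editing GRUB directly:

```sh
# Example values and targets only; appending a second GRUB_CMDLINE_LINUX_DEFAULT
# assignment works because /etc/default/grub is sourced as shell by grub-mkconfig.
salt -C 'cmp* or gtw*' file.append /etc/default/grub \
    'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT hugepagesz=2M hugepages=2048 iommu.passthrough=1"'
salt -C 'cmp* or gtw*' cmd.run 'update-grub'
salt -C 'cmp* or gtw*' system.reboot
```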
Design quirks:
- vpp service runs as 'neutron' user, which does not exist at the
  time VPP is installed and initially started, hence the need to
  restart it before starting the vpp-agent service (spelled out as
  plain commands after this list);
- gtw01 node has DPDK, yet to configure it via IDF we use the
compute-specific OVS-targeted parameters like
`compute_ovs_dpdk_socket_mem`, which is a bit misleading;
- vpp-agent requires ml2_conf.ini on ALL compute AND network nodes
  to parse per-node physnet-to-real-interface name mappings (example
  fragment after this list);
- vpp process is bound to core '1' (not parameterized via IDF; the
  startup.conf sketch after this list covers this and the socket-mem
  setting above);
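
The restart quirk spelled out as plain commands (the state files handle the actual ordering; the pillar target is the same one openstack_noha already uses):

```sh
# vpp is installed and auto-started before the 'neutron' user exists, so it has
# to be bounced once the neutron state has created the user; only then can the
# vpp-agent service be started.
salt -I 'neutron:compute' service.restart vpp
salt -I 'neutron:compute' service.start vpp-agent
```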
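
What the per-node mapping looks like in ml2_conf.ini, assuming networking-vpp's `[ml2_vpp] physnets` option; the physnet and interface names are examples, the real values come from the IDF:

```sh
# Example values only; vpp-agent reads the physnet-to-interface mapping from
# ml2_conf.ini on every compute and network node.
cat >> /etc/neutron/plugins/ml2/ml2_conf.ini <<'EOF'
[ml2_vpp]
physnets = physnet1:GigabitEthernet0/8/0
EOF
```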
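
And the VPP startup.conf fragment that the core-pinning and socket-mem quirks translate into; 2048 stands in for whatever `compute_ovs_dpdk_socket_mem` carries in the IDF, core '1' is the hardcoded main core:

```sh
# Fragment to be merged into /etc/vpp/startup.conf; written to an example path
# here so nothing real gets clobbered. Values are illustrative.
cat > /tmp/vpp-startup-example.conf <<'EOF'
cpu {
  main-core 1
}
dpdk {
  socket-mem 2048
}
EOF
```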
Change-Id: I659f7dbebcab7b154e7b1fb829cd7159b4372ec8
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
Diffstat (limited to 'mcp/config')
-rw-r--r-- | mcp/config/scenario/os-nosdn-fdio-noha.yaml | 33
-rwxr-xr-x | mcp/config/states/baremetal_init            |  2
-rwxr-xr-x | mcp/config/states/openstack_noha            |  2
3 files changed, 34 insertions, 3 deletions
diff --git a/mcp/config/scenario/os-nosdn-fdio-noha.yaml b/mcp/config/scenario/os-nosdn-fdio-noha.yaml
index b52a89cf4..747adbee2 100644
--- a/mcp/config/scenario/os-nosdn-fdio-noha.yaml
+++ b/mcp/config/scenario/os-nosdn-fdio-noha.yaml
@@ -24,4 +24,35 @@ virtual:
     vcpus: 4
     ram: 14336
   gtw01:
-    ram: 2048
+    vcpus: 8
+    ram: 8192
+    cpu_topology:
+      sockets: 1
+      cores: 4
+      threads: 2
+    numa:
+      cell0:
+        memory: 8388608
+        cpus: 0-7
+  cmp001:
+    vcpus: 8
+    ram: 8192
+    cpu_topology:
+      sockets: 1
+      cores: 4
+      threads: 2
+    numa:
+      cell0:
+        memory: 8388608
+        cpus: 0-7
+  cmp002:
+    vcpus: 8
+    ram: 8192
+    cpu_topology:
+      sockets: 1
+      cores: 4
+      threads: 2
+    numa:
+      cell0:
+        memory: 8388608
+        cpus: 0-7
diff --git a/mcp/config/states/baremetal_init b/mcp/config/states/baremetal_init
index 358e1874d..ba7ae30e5 100755
--- a/mcp/config/states/baremetal_init
+++ b/mcp/config/states/baremetal_init
@@ -27,7 +27,7 @@ salt -C "${cluster_nodes_query}" file.replace $debian_ip_source \
   repl="\n        if not __salt__['pkg.version']('vlan'):\n            __salt__['pkg.install']('vlan')"
 
 salt -C "${cluster_nodes_query}" pkg.install bridge-utils
-salt -C "${control_nodes_query}" state.apply linux.network,linux.system.kernel
+salt -C "${control_nodes_query}" state.apply linux.system.repo,linux.network,linux.system.kernel
 wait_for 5.0 "salt -C '${cluster_nodes_query}' state.apply salt.minion"
 wait_for 5.0 "salt -C '${compute_nodes_query}' state.apply linux.system,linux.network"
 wait_for 30.0 "salt -C '${cluster_nodes_query}' test.ping"
diff --git a/mcp/config/states/openstack_noha b/mcp/config/states/openstack_noha
index 98e2eff73..01f686b1f 100755
--- a/mcp/config/states/openstack_noha
+++ b/mcp/config/states/openstack_noha
@@ -38,7 +38,7 @@ salt -I 'heat:server' state.sls heat
 salt -I 'cinder:controller' state.sls cinder
 wait_for 3 "salt -I 'cinder:volume' state.sls cinder"
 
-salt -I 'neutron:server' state.sls neutron
+salt -I 'neutron:server' state.sls etcd,neutron
 salt -I 'neutron:compute' state.sls neutron
 
 salt -I 'nova:compute' state.sls nova
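
networking-vpp coordinates the ML2 driver on the controller with the per-node vpp-agent through etcd, which is why the etcd state is now applied on 'neutron:server' before neutron itself. A quick post-deploy sanity check could look like the following; the key prefix is an assumption about the deployed networking-vpp version:

```sh
# Assumed key prefix -- adjust to the networking-vpp release actually deployed
# (its agents register their state under etcd keys).
etcdctl ls --recursive /networking-vpp | head
```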