Baremetal clusters might benefit from having a little more time
to plug in the VIFs.
Change-Id: I9406a0ef24de2177827b3acd27b7c60b293a4572
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
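A minimal sketch of the intended effect, assuming the extra headroom is
granted through nova's vif_plugging_timeout option (the option choice and
the value below are illustrative assumptions, not taken from this change):

    [DEFAULT]
    # Give Neutron more time to plug VIFs on slower baremetal setups;
    # the upstream default is 300 seconds, 600 is only an example value.
    vif_plugging_timeout = 600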
The first VMs spawned still exhibit the race condition described in
the ticket, so apply the same workaround proposed during the Fraser
release cycle in FDS.
JIRA: FDS-156
Change-Id: I3b2b1ed7b5711daf81b5f4a263e4dbee9f502259
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
- cmp, gtw: bump RAM allocation to accommodate hugepages/VPP;
  for now we overcommit; gtw01 resources can probably be lowered;
- submodule: add salt-formula-neutron so we can locally patch it;
- repo:
* FD.IO repos for VPP packages;
* networking-vpp PPA for python-networking-vpp Neutron driver;
- use vpp-router for L3, disable neutron-l3-agent;
- baremetal_init: apply repo config before network (otherwise UCA
repo is missing when trying to install DPDK on baremetal nodes);
- arm64: iommu.passthrough=1 is required on ThunderX for VPP on
  newer kernels (see the kernel cmdline sketch after this list);
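A hedged illustration of the arm64 kernel parameter above, assuming it is
injected via the standard GRUB defaults file (the actual mechanism used by
the deploy tooling is not shown here):

    # /etc/default/grub on ThunderX (arm64) nodes running VPP
    # iommu.passthrough=1 switches the SMMU to identity (passthrough)
    # DMA mapping, which VPP/DPDK needs on newer kernels.
    GRUB_CMDLINE_LINUX_DEFAULT="... iommu.passthrough=1"
    # Regenerate the boot config afterwards, e.g. with update-grub.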
Design quirks:
- vpp service runs as 'neutron' user, which does not exist at the
time VPP is installed and initially started, hence the need to
restart it before starting the vpp-agent service;
- gtw01 node has DPDK, yet to configure it via IDF we use the
compute-specific OVS-targeted parameters like
`compute_ovs_dpdk_socket_mem`, which is a bit misleading;
- vpp-agent requires ml2_conf.ini on ALL compute AND network nodes
  to parse per-node physnet-to-real interface names (see the sketch
  after this list);
- vpp process is bound to core '1' (not parameterized via IDF);
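A rough sketch of the per-node ml2_conf.ini mentioned above, assuming the
networking-vpp [ml2_vpp]/physnets option from its upstream documentation
(the physnet and interface names below are made up for illustration):

    [ml2_vpp]
    # Map each Neutron physnet to the real interface on this node;
    # the mapping differs per node, hence the file is needed on every
    # compute and network node. Names are illustrative only.
    physnets = physnet1:TenGigabitEthernet1/0/0

The core '1' binding mentioned above presumably corresponds to the
main-core setting in VPP's startup.conf (cpu { main-core 1 }), hardcoded
rather than derived from IDF.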
Change-Id: I659f7dbebcab7b154e7b1fb829cd7159b4372ec8
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
While at it, rename the FDIO (VPP) scenarios to align with the OPNFV FDS
and OPNFV Apex projects.
Change-Id: I9aab5dc4a0dc41a2cc996687a8a2726d03288678
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>