Optimized for LF-POD2, where the NIC assigned to the private/dpdk
interface and the pinned cores reside on NUMA node #0. Core #11 is
for DPDK; the remaining four cores are for PMDs.
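
A minimal sketch of the corresponding OVS-DPDK pinning (mask values
are illustrative, not the actual LF-POD2 layout: bit 11 selects core
#11 for the DPDK lcore; a hypothetical PMD set of cores #7-#10 gives
0x780):

    # Illustrative only; actual masks depend on the LF-POD2 core layout.
    ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x800
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x780
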
Change-Id: Icca701bc1a66f3672b8511e0245c82ca29788a8b
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
(cherry picked from commit f2e8f88cced6b61adbd8d22763253fdd9f8ec0e6)
Change-Id: Id75ffe4db808a4ec250ba8b86c5d49f1206c3784
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
- replace the mas01 VM with a Docker container (see the sketch after
  this list);
- drop `mcpcontrol` virsh-managed network, including special handling
previously required for it across all scripts;
- drop infrastructure VM handling from the scripts; the only VMs we
  still handle are cluster VMs for virtual and/or hybrid deployments;
- drop the SSH server from mas01;
- stop running the linux state on mas01, as all prerequisites are
  properly handled during the Docker build or via entrypoint.sh; for
  completeness, we still keep pillar data in sync with the actual
  contents of the mas01 configuration, so running the state manually
  would still work;
- make port 5240 available on the jumpserver for MaaS dashboard access;
- docs: update diagrams and text to reflect the new changes;
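
As a rough sketch of the new setup (image name and options below are
hypothetical placeholders, not the actual build artifacts):

    # Hypothetical invocation of the container replacing the mas01 VM;
    # port 5240 publishes the MaaS dashboard on the jumpserver.
    docker run -d --name mas01 -p 5240:5240 opnfv/fuel-maas:latest
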
Change-Id: I6d9424995e9a90c530fd7577edf401d552bab929
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
Baremetal clusters might benefit from having a little more time
to plug in the VIFs.
Change-Id: I9406a0ef24de2177827b3acd27b7c60b293a4572
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
The first VMs spawned still exhibit the race condition described in
the ticket, so apply the same workaround proposed during the Fraser
release cycle in FDS.
JIRA: FDS-156
Change-Id: I3b2b1ed7b5711daf81b5f4a263e4dbee9f502259
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
- cmp, gtw: bump RAM allocation to accommodate hugepages/VPP;
  for now we overcommit; gtw01 resources can probably be lowered;
- submodule: add salt-formula-neutron so we can locally patch it;
- repo:
* FD.IO repos for VPP packages;
* networking-vpp PPA for python-networking-vpp Neutron driver;
- use vpp-router for L3, disable neutron-l3-agent;
- baremetal_init: apply repo config before network (otherwise UCA
repo is missing when trying to install DPDK on baremetal nodes);
- arm64: iommu.passthrough=1 is required on ThunderX for VPP on
newer kernels;
Design quirks:
- vpp service runs as 'neutron' user, which does not exist at the
time VPP is installed and initially started, hence the need to
restart it before starting the vpp-agent service;
- gtw01 node has DPDK, yet to configure it via IDF we use the
compute-specific OVS-targeted parameters like
`compute_ovs_dpdk_socket_mem`, which is a bit misleading;
- vpp-agent requires ml2_conf.ini on ALL compute AND network nodes
to parse per-node physnet-to-real interface names;
- vpp process is bound to core '1' (not parameterized via IDF); this
  binding and the ml2_conf.ini mapping are sketched below;
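
For illustration only (all values below are assumptions, not the
deployed configuration):

    # /etc/vpp/startup.conf (sketch): pin the VPP main thread to core 1
    cpu {
      main-core 1
    }

    # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch): per-node
    # physnet-to-real interface mapping read by vpp-agent; the section
    # layout and interface name are assumptions.
    [ml2_vpp]
    physnets = physnet1:GigabitEthernet0/8/0
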
Change-Id: I659f7dbebcab7b154e7b1fb829cd7159b4372ec8
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
While at it, rename FDIO (VPP) scenarios to align with OPNFV FDS
and OPNFV Apex projects.
Change-Id: I9aab5dc4a0dc41a2cc996687a8a2726d03288678
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>