The hugepage count has recently been bumped for virtual PODs via IDF
changes in Pharos, so align our FDio scenarios with the new RAM
requirements.
While at it, fix the wrong pod_config template evaluation by moving it
after the templated scenario files are expanded, since pod_config
relies on the scenario node definitions.
Also, configure VPP to use decimal interface names by default to
align with the Pharos macro for the VPP interface name string.
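For reference, a minimal sketch of how such a hugepage reservation is
typically expressed in the reclass model, assuming salt-formula-linux's
hugepages support; size, count and mount point below are illustrative,
the real values being derived from the Pharos IDF:

  linux:
    system:
      kernel:
        hugepages:
          large:
            default: true                  # make 1G the default hugepage size
            size: 1G
            count: 8                       # illustrative; must fit in the bumped node RAM
            mount_point: /mnt/hugepages_1G
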
Change-Id: Ib3a89c294a3a2755567fdbe07e3be2b8ca1a5714
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
(cherry picked from commit b381277ae274473ae4e05a1aa9dd171dbab461d6)
---
The per-port model potentially requires an increase in memory
resources (which are limited in our labs) to support the same number
of ports and configuration as the shared-port model.
Set linux:network:openvswitch:per_port_memory explicitly to true
to enable per-port mempool support for DPDK devices.
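A sketch of the resulting pillar override, assuming the formula maps this
key onto OVS's other_config:per-port-memory switch:

  linux:
    network:
      openvswitch:
        per_port_memory: true  # enable per-port mempools for DPDK ports
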
Change-Id: I130885afc50e7a047f8835113d370840827ad718
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
(cherry picked from commit 90480f70cf3d287e82830a1df157a0066807a3c9)
---
The per-port memory model provides more transparent memory usage
and avoids pool exhaustion due to competing memory requirements for
interfaces. (http://docs.openvswitch.org/en/latest/topics/dpdk/memory/)
Change-Id: I5add0f49cdcdf2fc3d24affee10a275abe3ca46a
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
---
Fix broken systemd service unit dependencies:
- OVS should start before networking service;
- OVS ports & bridges should not be automatically ifup-ed by
networking service to avoid races, so drop 'auto' for both
(OVS ports are automatically handled when part of an OVS bridge);
- explicitly ifup OVS bridges as part of networking service, but
after all Linux interfaces have been handled;
- use 'allow-ovs br-prv' to let OVS handle br-prv and avoid another
race condition;
While at it, fix some other related issues:
- make OVS service start after DPDK service (if present);
- bump OVS-DPDK compute VM RAM, since switching from MTU 1500
to jumbo frames for virtual PODs a while ago failed to do so [1];
- avoid creating conflicting reclass linux.network.interfaces entries
for OVS ports by using their name (drop 'ovs_port_' prefix):
* for untagged networks they will override existing common defs;
* for tagged networks, they will create separate entries;
- DPDK scenarios: define the gtw01 br-prv member interfaces as OVS
ports, letting OVS handle them and avoiding race conditions after
node reboot (see the pillar sketch below);
[1] https://developers.redhat.com/blog/2018/03/16/ovs-dpdk-hugepage-memory/
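A minimal reclass sketch of the intended br-prv wiring, assuming
salt-formula-linux's ovs_bridge/ovs_port interface types; the member
interface name is illustrative:

  linux:
    network:
      interface:
        br-prv:
          enabled: true
          type: ovs_bridge   # rendered as 'allow-ovs br-prv', brought up by OVS
        nic1:                # illustrative member name, no 'ovs_port_' prefix
          enabled: true
          type: ovs_port
          bridge: br-prv
          proto: manual      # no 'auto' stanza, so the networking service skips it
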
Change-Id: I0266ba67f3849b6f7e331a758146b331730bae55
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
---
The OVS port remains in the down state after reboot if "auto" is off.
Also, turn off the no_wait option for odl-noha scenarios.
Change-Id: I0121b3190869528e5f2e9985f9e9299ac6c6724e
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
---
Handle the noifupdown option for all cmd.run states with an explicit
ifup call as well.
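A hypothetical sketch of the pattern as a Salt state; the state ID and
interface name are made up:

  gtw01_ifup_br_prv:
    cmd.run:
      - name: ifup br-prv      # explicit ifup, since the iface is defined with noifupdown
      - require:
        - sls: linux.network   # bring the interface up only after linux.network is applied
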
Change-Id: Ie855a0810bcfe4a856cf9d29bd0755643d71ff4d
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
---
- cmp, gtw: bump RAM allocation to accommodate hugepages/VPP;
for now we overcommit; gtw01 resources can probably be lowered;
- submodule: add salt-formula-neutron so we can locally patch it;
- repo:
* FD.IO repos for VPP packages;
* networking-vpp PPA for python-networking-vpp Neutron driver;
- use vpp-router for L3, disable neutron-l3-agent;
- baremetal_init: apply repo config before network (otherwise UCA
repo is missing when trying to install DPDK on baremetal nodes);
- arm64: iommu.passthrough=1 is required on ThunderX for VPP on
newer kernels (see the boot options sketch below);
Design quirks:
- vpp service runs as 'neutron' user, which does not exist at the
time VPP is installed and initially started, hence the need to
restart it before starting the vpp-agent service;
- gtw01 node has DPDK, yet to configure it via IDF we use the
compute-specific OVS-targeted parameters like
`compute_ovs_dpdk_socket_mem`, which is a bit misleading;
- vpp-agent requires ml2_conf.ini on ALL compute AND network nodes
to parse per-node physnet-to-real interface names;
- vpp process is bound to core '1' (not parameterized via IDF);
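For the arm64 quirk above, a sketch of how the kernel parameter could be
pinned via reclass, assuming salt-formula-linux's boot_options support:

  linux:
    system:
      kernel:
        boot_options:
          - iommu.passthrough=1  # needed on ThunderX for VPP with newer kernels
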
Change-Id: I659f7dbebcab7b154e7b1fb829cd7159b4372ec8
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
---
* bump salt-formula-maas git submodule;
* sync AArch64 initial salt config with the x86_64 default config;
* bump Pharos git submodule to sync `power_pass` MaaS configuration
  parameter naming;
Change-Id: Ic59dd8becb6d83a9e67004c38d51681c88c4be7c
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
---
JIRA: FUEL-392
Change-Id: Ia21840c7561a14a5eeed3d08bf89eb2dbf9acc3a
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
---
* ship a prebuilt salt master config for better readability:
  - enable x509.sign_remote_certificate (for prx VCP nodes; see the
    peer-publishing sketch after this list);
* refactor Salt master CA handling:
- preinstall `salt_minion_dependency_packages` and
`salt_minion_reclass_dependencies` inside docker image;
- persistent /etc/pki;
- run salt.minion on cfg01 to generate master keys;
* bump container formulas to 1 Sep 2018 versions or newer:
- inject date into Docker makefile, forcing a fresh fetch of all
salt formulas from upstream git repos;
* work around salt-formula-designate's broken meta/sphinx.yml:
- the DEB package version of salt-formula-designate uses `cmd.shell`
to query dpkg on the minion, while the git repo version still
uses `cmd.run`, running into parsing issues;
- temporarily disable sphinx metadata generation for designate until
upstream git repo syncs with the DEB version;
* upstream: salt-formula-salt AArch64 salt.control.virt support:
- retire salt-formula-salt git submodule and related patches;
* skip installing reclass distro package (already installed via pip
inside the container);
* limit initial pillar_refresh call to nodes on jumphost;
* remove unused salt-formula-nova git submodule;
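For reference, a sketch of the Salt master peer-publishing option this
relies on; the match expression is illustrative, and without it minions
such as the prx VCP nodes cannot request certificates signed by the CA
minion:

  peer:
    .*:
      - x509.sign_remote_certificate
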
JIRA: FUEL-383
Change-Id: I883b825e556f887a5e31f8a43676dcd8ece6dfde
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
---
* shift the MTU setting from the public bridge to the physical interface
* add Neutron-related settings
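A sketch of the intent in reclass terms; interface name and MTU value are
illustrative:

  linux:
    network:
      interface:
        eth1:          # physical interface now carries the MTU
          mtu: 9000    # illustrative; the public bridge no longer sets it explicitly
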
Change-Id: Ia57d1ca7976968d6e7ee23f58a0abae1a1a256c0
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
---
Since we reboot all nodes, applying the network configuration via
Salt before reboot is pointless and creates a race condition with
OVS.
While at it, add `--ignore-errors` to the ifup call for the OVS bridge
to prevent a race condition while applying the linux.network state.
Change-Id: I22fe0afaffecd7b850a6b77d7b810ed296bfc9ca
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
---
* Refactor OPNFV salt-formulas mechanism to resemble upstream git
structure:
- git submodules: add new submodule for each formula we patch;
- create salt-formula-x directories for OPNFV formulas;
- move mcp/metadata/service contents to their respective formula subdirs;
- use `make patches-import` for patches previously handled by
patch.sh;
- retire patch.sh
* states: add virtual_init:
- mostly based on old salt.sh, which is now obsolete;
- exclude salt-master service restart (it would kill the container);
* scenarios: clean up (remove the cfg01 virtual node def), adopt virtual_init;
* reclass: align our model with prebuilt container's Salt config:
- drop linux:network pillar data (handled by Docker);
- stop applying linux.system state on cfg01;
- align salt user homedir;
- drop salt-formula packages (preprovisioned);
* minor plumbing in deploy.sh and lib.sh;
JIRA: FUEL-383
Change-Id: I28708a9b399d3f19012212c71966ebda9d6fc0ac
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>