Age | Commit message | Author | Files | Lines |
|
On some aarch64 platforms (e.g. ThunderX 1), lvcreate exhibits
spurious timing issues that result in incomplete/corrupted LVM thin
creation and, eventually, a transaction ID mismatch between userspace
and kernel space.
This in turn leads to cinder-volume issues, either when creating the
thin storage pool (vgroot-pool) or when creating the LVs inside said
pool.
The issue manifests sporadically on Ubuntu Bionic + UCA, so until a
working userspace/kernel combination is found, work around it by
bumping the kernel package to hwe-18.04 (kernel 5.0), effectively
bypassing the timing issues during volume creation.
This affects all cluster machines (HA and NOHA scenarios, x86_64 and
aarch64, baremetal and virtualized nodes).
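For reference, the mismatch can be spotted and the workaround applied
roughly as follows (VG/pool names are examples, not taken from this
change):
  # compare userspace vs kernel-side thin pool transaction IDs
  lvs -o lv_name,transaction_id vgroot
  dmsetup status vgroot-pool-tpool  # field after 'thin-pool' is the kernel transaction ID
  # work around by moving to the Bionic HWE kernel (5.0)
  apt-get install -y linux-generic-hwe-18.04
  reboot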
Note: Ubuntu Bionic cloud image partition handling requires e2fsprogs
1.43, which is not currently available on Ubuntu Xenial / CentOS 7.
Change-Id: I839e03080104c391fe18185b9544c9df43c114e6
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: I9c1e97144ffd46040d32a0edf8253fc393b73c89
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
- reclass: iec: CentOS compatibility changes:
* drop `proto: static` in favor of letting the linux formula set
the appropriate default based on the target OS;
* replace `proto: manual` with `proto: none` on RHEL systems;
* system.file: Avoid using non-existing `shadow` group for system
files;
* load br_netfilter kernel module to avoid `linux.network` state
failures;
* disable `at`, `cron` due to incomplete defaults in
salt-formula-linux (since we don't use them on iec nodes anyway);
- jumpserver/VCP VMs: centos: enable predictable interface names:
* CentOS cloud image defaults to old 'eth' naming scheme;
* add necessary kernel boot options via linux state;
* clean up auto-generated udev rules for the old eth interface
names (see the sketch after this list);
- salt-formula-linux: network: RHEL: Set bridge for member interfaces
* Find the bridge containing the interface being currently
configured (if any) and pass it to the `network.managed` Salt call;
- deploy.sh: Add new deploy argument `-o` for specifying the operating
system to preinstall on jumpserver and/or VCP VMs;
* defaults to 'ubuntu1604';
* only iec scenarios additionally support 'centos' for now;
- user-data: minor tweaks for CentOS compatibility:
* use `systemctl` instead of `service` utility;
* explicitly enable `salt-minion` service, since it defaults to
disabled on RHEL systems;
* explicitly call `ldconfig` to work around stale cache on RHEL,
preventing `salt-minion` from using OpenSSL library;
- states: virtual_init: Skip non-existing sysctl options on CentOS:
* CentOS currently uses a 3.x kernel which lacks certain sysctl
options that were only introduced in 4.x kernels, so skip them;
- state: akraino_iec: Add centos support:
* move iec repo to `/var/lib/akraino/iec` on both Salt Master and
cluster nodes;
- scenario defaults: Add CentOS configuration:
* OS-dependent configuration split;
* CentOS base image, default packages etc.;
- AArch64 deploy requirements: Add `xz` dependency
* CentOS AArch64 cloud image is archived using xz, install xz tools
for decompression;
- xdf_data: Make yaml parsing OS agnostic:
* rename `apt` to `repo` where appropriate;
* OS-dependent configuration parsing;
- lib_jump_deploy: CentOS handling changes:
* skip filesystem resize of cloud image for CentOS;
* add repo and package installation/removal handling for CentOS;
* unxz base image if necessary (CentOS AArch64 cloud image);
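As a rough sketch of two of the items above, the manual equivalent on
a CentOS node would look something like this (the actual changes go
through reclass pillar data and the linux state):
  # load br_netfilter now and on every boot
  modprobe br_netfilter
  echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
  # re-enable predictable interface names (cloud images pass net.ifnames=0)
  sed -i 's/net.ifnames=0 //g' /etc/default/grub
  grub2-mkconfig -o /boot/grub2/grub.cfg
  # drop auto-generated udev rules for the old eth* names
  rm -f /etc/udev/rules.d/70-persistent-net.rules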
Change-Id: Ic3538bacd53198701ff4ef77db62218eabc662e7
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
When multiple installers are used on the same jumpserver, it is
useful to be able to automatically clean up after a previous deploy.
Change-Id: Ib3249f53ee9d6b1ba2409dd71bd13480536faedc
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Updated the documentation for the Hunter release, plus one minor
wording change in the deploy script, since we no longer install
just OpenStack.
Change-Id: I853f5536b0f4a89a8c20af0a9650372690ef7c99
Signed-off-by: Cristina Pauna <cristina.pauna@enea.com>
|
|
NOTE: only os-nosdn-nofeature-noha is parameterized for now.
- move config drive & disk creation from prepare_vms to create_vms;
- make default disk size(s) configurable based on scenario defaults
and vPDF;
* compute nodes require 2 disks to be defined in vPDF, since the
pillar reclass model assumes /dev/vdb is reserved for cinder;
* if multiple disks are defined in vPDF, they are created and
attached accordingly (only ctl01 and cmp nodes are parameterized
in this change; only for the os-nosdn-nofeature-noha scenario);
- vCPU specifications are deduced based on vPDF (sockets, cores);
* threads/core is hard set to 2 since vPDF does not have a key
for it;
* NUMA resources are distributed evenly based on the number of
sockets configured in PDF;
* no less than the minimum requirement for a scenario is allocated
(e.g. if PDF specifies 2 cores, but the scenario requires at
least 4 cores, the larger value will be used);
- RAM is deduced based on PDF (but no less than the minimum req is
allocated, e.g. if PDF specifies 2GB RAM for computes, but the
scenario requires at least 8GB, the larger value will be used);
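The clamping logic amounts to something like this (variable names are
illustrative, not the actual ones used by the scripts):
  # use the PDF-provided value, but never go below the scenario minimum
  vcpus=$(( pdf_cores >= min_cores ? pdf_cores : min_cores ))
  ram_mb=$(( pdf_ram_mb >= min_ram_mb ? pdf_ram_mb : min_ram_mb ))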
Change-Id: I97188aa2a1006865b8429eb6483e10c76795f7d2
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- replace mas01 VM with a Docker container;
- drop `mcpcontrol` virsh-managed network, including special handling
previously required for it across all scripts;
- drop infrastructure VMs handling from scripts, the only VMs we still
handle are cluster VMs for virtual and/or hybrid deployments;
- drop SSH server from mas01;
- stop running linux state on mas01, as all prerequisites are properly
handled during Docker build or via entrypoint.sh; for completeness,
we still keep pillar data in sync with the actual contents of mas01
configuration, so running the state manually would still work;
- make port 5240 available on the jumpserver for MaaS dashboard access;
- docs: update diagrams and text to reflect the new changes;
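Conceptually, dashboard access now only requires publishing port 5240
from the container, i.e. something along the lines of the snippet
below (image name is a placeholder; the container is not actually
started by hand):
  docker run -d --name mas01 --publish 5240:5240 <maas-image>
  # dashboard reachable at http://<jumpserver-ip>:5240/MAAS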
Change-Id: I6d9424995e9a90c530fd7577edf401d552bab929
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Extend one of the existing deployment arguments to allow the
installation of only the operating system and infrastructure networks,
skipping cloud setup.
Change-Id: Ibc5d0f324ed15b66f809839cfce49a0324b6fe4d
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Outside OPNFV Jenkins (i.e. when manually cloning the OPNFV Fuel
repo and starting a deploy), the Docker tag used to default to
'latest' unless the user specifically set it to 'gambia'.
Rely on 'defaultbranch' setting in .gitreview to determine the
appropriate Docker tag.
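A minimal sketch of the tag derivation, assuming `defaultbranch` lives
in the [gerrit] section of .gitreview (the usual git-review layout):
  branch=$(git config -f .gitreview gerrit.defaultbranch 2>/dev/null || echo 'master')
  DOCKER_TAG=${DOCKER_TAG:-${branch#stable/}}  # e.g. 'stable/gambia' -> 'gambia'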
Change-Id: I7e6b0706597d84d7cd5dc077499da78031aa61af
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- s/Fuel@OPNFV/OPNFV Fuel/g;
- added README files for ci/scenarios/patches directories;
- refresh & simplify cluster overview diagrams;
- unify labels across docs;
- fix TOC numbering;
- remove local labs PDF/IDF files, as they are merely duplicates of
Pharos files included as a git submodule;
JIRA: FUEL-397
Change-Id: I87f61938eeb67f13fd9205d5226a30f02e55d267
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
_param:aodh_version was lost during a recent refactor, bring it back.
While at it, also make chown in entrypoint.sh recursive to prepare
for non-sudo deployments.
Fixes: c0de0902
Change-Id: I41b225c4a3f15269aa156a1c33412206beff6ee9
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
lib.sh got pretty big over time, making it hard to maintain.
Since most of the functions defined now in lib.sh are only required
during build/deploy and not in state files, move them to a new file.
While at it, prepare for running build/deploy as non-root and
set a default connection string for virsh instead of using
user-specific config in ~/.config/libvirt/libvirt.conf, which
caused end-user experience issues in the past.
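A minimal sketch of the virsh connection default (variable name
illustrative):
  # always talk to the system libvirt instance, regardless of per-user config
  VIRSH_CONNECTION=${VIRSH_CONNECTION:-qemu:///system}
  virsh --connect "${VIRSH_CONNECTION}" list --all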
Change-Id: Id8c2a8139e4bfdb99af2b0fad73b911ffa18ebea
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: Iaa60008e234a506b05e7b5249c85fd1d6fa63d56
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
`virtual_init` state file tries to ping all FN VMs, but that won't
work on hybrid PODs since all FN VMs but mas01 require MaaS DHCP to
be already configured (i.e. FN VMs in question will be reset after
mas01 is fully configured).
Limit virtual node queries in `virtual_init` to mas01 VM, as the rest
of FN VMs will be handled via `baremetal_init` state.
While at it, move _param:apt_mk_version def to common reclass to
avoid an undefined reference in NOHA hybrid deployments; set MCP_VCP to
0 for non-HA scenarios.
JIRA: FUEL-385
Change-Id: I582bca6864e9bfed23baf26f9b66e6e95e986c58
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Bring back MAAS_IP global env var and use it for mas01 VM IP addr
in mcpcontrol network to prevent salt minion signature change.
Partially-reverts: b666bc50
Change-Id: I5c7668393fe66287bd3ecdc75dd3195d5a89a8f3
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: I55a3c10f275079b11b7456b28a2c846cb33c204a
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Restart the docker service to refresh the FORWARD chain
and insert docker-related rules on top.
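Roughly equivalent to the following on the affected host:
  systemctl restart docker
  iptables -S FORWARD | head  # docker-related rules should now come first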
Change-Id: I971840f5979636c4ea8ae4d66a82982c24aa5f66
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Since we switched to dockerized Salt master and mounting the full
contents of the current git repo inside cfg01's /root/fuel dir,
exposing temporary deploy artifacts should be avoided, i.e. the
default temporary storage location (configurable via '-S' deploy arg)
should be moved outside the git repo.
JIRA: FUEL-383
Change-Id: I32f7197b018fd853867de42f5618650ac9022dc9
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
While at it, retire obsolete MAAS_IP global variable and let mas01
VM get a DHCP address from virsh-managed mcpcontrol network.
Change-Id: Ifd85dbcab10894a5d0d675d37f0c35f09776d9b4
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Install a newer version of virt-manager in a local directory to
work around obsolete Ubuntu versions lacking --cpu cellN.* support.
This change only affects the CPU configuration of virtual compute
nodes in nosdn-nofeature-noha scenarios:
- set default cpu_topology to dual socket (2 cores, 2 sockets,
2 threads);
- bump default RAM to 16GB;
- define 2 NUMA cells, each with half the resources;
To keep the old behavior available (single socket), a new deploy
argument has been added (`-m`). The RAM change is not configurable
via deploy args.
NOTE: The CPU topology for virtual nodes should later be read from
PDF instead of hardcoding it on a per-scenario basis in the installer.
NOTE: Default 'ram' unit is MiB, while cellN.memory default unit is
'KiB'.
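For illustration, the resulting CPU arguments for a virtual compute
with the defaults above would look roughly like this (node name is an
example, disk/network arguments omitted):
  virt-install --name cmp001 --ram 16384 \
    --vcpus 8,sockets=2,cores=2,threads=2 \
    --cpu host-passthrough,cell0.memory=8388608,cell0.cpus=0-3,cell1.memory=8388608,cell1.cpus=4-7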
JIRA: FUEL-385
Change-Id: I7ca268b0a2052524cb7187a5cf9b6fa8a382c9f9
Signed-off-by: Dimitrios Markou <mardim@intracom-telecom.com>
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Allow skipping docker pull for verify jobs by setting the new env
var to 'verify'.
JIRA: FUEL-383
Change-Id: If8e2f66b5ccdac5c3911eeabfc2ba9c0eba61093
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Drop duplicate maas:machines definitions which could cause conflicts
in rare corner cases.
Slightly refactor j2 template expansion to make `conf.virtual.nodes`
available during first stage.
Change-Id: I04d56e346b12c6eb97da5c0c0ab1e3446e5fc1b8
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
* refactor & extend `-f` deploy argument;
* retire INFRA_CREATION_ONLY env var, duplicated by
NO_DEPLOY_ENVIRONMENT;
* `-F` and `-e` deploy arguments are now equivalent;
JIRA: FUEL-383
Change-Id: Ifc1527fa1e7d7486d1b7600772e2c5de34b1e52c
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
* Refactor OPNFV salt-formulas mechanism to resemble upstream git
structure:
- git submodules: add new submodule for each formula we patch;
- create salt-formula-x directories for OPNFV formulas;
- move mcp/metadata/service contents to their respective formula
subdirs;
- use `make patches-import` for patches previously handled by
patch.sh (see the sketch after this list);
- retire patch.sh
* states: add virtual_init:
- mostly based on old salt.sh, which is now obsolete;
- exclude salt-master service restart (it would kill the container);
* scenarios: cleanup (rm cfg01 virtual node def), adopt virtual_init;
* reclass: align our model with prebuilt container's Salt config:
- drop linux:network pillar data (handled by Docker);
- stop applying linux.system state on cfg01;
- align salt user homedir;
- drop salt-formula packages (preprovisioned);
* minor plumbing in deploy.sh and lib.sh;
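The new formula layout boils down to something like the snippet below
(URL, path and formula name are placeholders):
  git submodule add <upstream-formula-git-url> mcp/salt-formulas/salt-formula-foo
  # patches previously applied by patch.sh now live next to the submodule
  make patches-import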
JIRA: FUEL-383
Change-Id: I28708a9b399d3f19012212c71966ebda9d6fc0ac
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
JIRA: FUEL-383
Change-Id: I19d27ca59a3f24d1bd66e39457a6ca267bccce19
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Add support for different prerequisites depending on the current
operation (docker build or cluster deploy).
Leverage the new support to pre-install upcoming deps:
- python-pip (build);
- docker-compose (deploy);
JIRA: FUEL-383
Change-Id: Ic3e6062b1943e3584f0b1f80d2e33b8812defced
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Upcoming support for containerized Salt Master node requires access
to certain variables during j2 interpolation, so also export:
- MCP_STORAGE_DIR;
- MCP_REPO_ROOT_PATH;
JIRA: FUEL-383
Change-Id: I584c0bf8133b5ae6178d97da5b44d345e45a0222
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
By default, vnet devices have an MTU of 1500 on the host side, causing
issues with larger packets traversing the bridges between guest VMs
when the guests have jumbo frames enabled.
JIRA: FUEL-336
JIRA: FUEL-367
JIRA: FUEL-382
[1] http://linuxaleph.blogspot.com/2013/01/
how-to-network-jumbo-frames-to-kvm-guest.html
[2] https://packetpushers.net/udev/
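A sketch of the udev-based approach from [2] on the host side (MTU
value and rule file name are illustrative):
  echo 'SUBSYSTEM=="net", ACTION=="add", KERNEL=="vnet*", ATTR{mtu}="9000"' \
    > /etc/udev/rules.d/99-vnet-mtu.rules
  udevadm control --reload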
Change-Id: I941ac9cf764e3b3fa2d6463be5363b5459775f29
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
For hybrid PODs (e.g. x86_64 jumpserver + control nodes, aarch64
baremetal compute nodes), the virtual nodes rely on MaaS DHCP to be
up when the OS boots, so issue a `virsh reset` accordingly.
Instead of checking for online nodes using `test.ping`, use
`saltutil.sync_all` to also sync Salt state modules to the virtual
nodes (usually handled by baremetal_init state in HA deploys).
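Conceptually (minion target and node name are illustrative):
  # reboot the FN VMs once MaaS DHCP is available, then sync Salt modules
  virsh reset cmp001
  salt -C 'not cfg01* and not mas01*' saltutil.sync_all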
JIRA: FUEL-338
Change-Id: If689d057dc4438102c3a7428a97b9638e21bfdc5
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Previously, all virtual deployments relied on lab configuration from
<./mcp/config> using 'local-virtual1' as POD name.
To keep using that default sample config, one can still pass the
appropriate arguments to <ci/deploy.sh> as in:
$ ./ci/deploy.sh -l local -p virtual1 [...]
This change introduces support for vPOD-specific lab config, e.g.:
$ ./ci/deploy.sh -l ericsson -p virtual4
Change-Id: I6ecb8e71d7e923e4b29b4ab0693c1afa5e19b582
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: If77ac85fa86e0a1a18c0cc2abff77d876cdb9e93
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Instead of duplicating scenarios for NOVCP, allow it to be specified
using a new deploy argument, `-N`.
Things are getting convoluted, so instead of creating dedicated
'*_pdf.yml.j2' files for each group of similar features, apply the
templating in-place and rename all affected files to ".yml.j2".
This breaks the .gitignore assumption of hiding only "*_pdf.yml"
files, so manually extend <mcp/reclass/classes/cluster/.gitignore>
using `git ls-files --exclude-standard -o` after an expansion (see
the sketch after this list).
- ha: move nfv.cpu_pinning to j2, conditioned by 'baremetal';
- ha: add cmp00* vnode definitions (hugepages need more RAM);
- labs/local: enable hugepage params for non-dpdk noha;
- salt.sh: add route_wrapper to all non-infra VMs;
This change extends novcp support to all HA scenarios.
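The .gitignore refresh mentioned above amounts to something like:
  cd mcp/reclass/classes/cluster
  # after one expansion, append all untracked (generated) files
  git ls-files --exclude-standard -o >> .gitignore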
Change-Id: I7a80415ac33367ab227ececb4ffb1bc026546d36
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
j2/python makes it easier to read and manipulate strings, although it
does need some special care around undefined dict keys.
With this in place, deploy.sh only contains the higher level logic for
the deployment process.
- merge arch-specific default configuration files into a single file,
with the arch name as the main dict key of the old config (also avoids
creating duplicate 'virtual' YAML keys in $LOCAL_PDF);
- move template handling to separate <lib_template.sh>;
- decouple tight bash ordering of scenario expansion -> parse_yaml ->
variable export (e.g. CLUSTER_DOMAIN) -> re-use in cluster j2s;
however we can't parse *all* j2s in one go, as scenario j2s might
expand to YAMLs needed while expanding cluster j2;
- split `do_templates` into separate functions for each stage, with
no coupling between them other than call order;
Change-Id: I4b5e804094c00e5e918caf769fd85fa52181ad76
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: I687b73b256aca78c9d41d4bcd49bfbde51278b51
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: Ie4d8e70866d533d195a6e80cdfecbdb00a3027ce
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Drop one questionable patch that made MaaS node authorized keys
include mcp.rsa.pub by reading the contents of the authorized keys
on mas01, assuming mcp.rsa.pub would be on the first line.
Instead, export the contents of the public key via a shell env
var during deploy, which gets expanded via the maas_pdf j2 template
into a reclass param, leveraging the existing salt-formula-maas
sshprefs mechanism for delivering the key to MaaS.
Since we require the public key to exist before expanding templates,
move `generate_ssh_key` call outside the current infrastructure
handling block, allowing it to execute during all `deploy.sh` calls,
even for dry-runs.
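A minimal sketch of the new flow (env var name is illustrative):
  generate_ssh_key  # make sure mcp.rsa/mcp.rsa.pub exist before template expansion
  export MAAS_SSH_KEY="$(cat mcp.rsa.pub)"
  # the maas_pdf j2 template picks the value up and turns it into the
  # reclass sshprefs parameter consumed by salt-formula-maas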
Change-Id: I0f53b0f764a2fafd292e0ffd399c284acf61bd30
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- MaaS requires PXE/admin to be a Linux bridge;
- if virtual nodes are present, they should be hooked to a proper
Linux bridge for the Public network, but only throw a warning if
not (and create a mock public virsh network instead);
- if both virtual and baremetal nodes are present, Public bridge is
indirectly mandatory (we can't mock it);
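A sketch of the validation (bridge variable name illustrative):
  if ! ip link show "${public_bridge}" >/dev/null 2>&1; then
    echo "WARN: no Public bridge found, falling back to a mock virsh network" >&2
  fi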
JIRA: FUEL-339
Change-Id: Idfe99d66c49eadc56cb3d94ca4db3467fb76d388
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Instead of classifying scenarios by underlying machine type, switch
to HA/NOHA differentiation only.
This allows us to add support for hybrid scenarios (with some virtual
and some baremetal nodes in the same cluster).
To facilitate this, we will template the scenario files, which is a
small step towards SDF (Scenario Descriptor File) definition and
adoption later.
JIRA: FUEL-338
Change-Id: If5787991869a3105d82c27ffa0a86ac79b4b08ba
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- add missing network definitions for ODL node's 1st interface;
- add missing comments for `notify` global functions;
- fix or silence shellcheck issues;
JIRA: FUEL-322
Change-Id: Ie3341d29ab12ddf432db603ad865259afb54714e
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- add new virsh managed network 'pxebr' (to mimic baremetal behavior
on virtual PODs, this will be the equivalent of PXE/admin network);
- connect 'pxebr' to 3rd interface for cfg01, mas01 for all deploys
(used to be baremetal-specific), replacing 'internal';
- keep 'mcpcontrol' connected only to 'cfg01' (+ 'mas01' if present)
for initial infrastructure bring-up (1st interface);
- switch all virtual cluster nodes to 'pxebr' (1st interface);
- use 'pxebr' for all Salt cluster nodes traffic, 'mcpcontrol' only
for mas01<=>cfg01 Salt traffic;
- convert <user-data.template> to jinja2 and expand it based on PDF
instead of using `envsubst`;
- split <user-data.sh.j2> into two versions, one for each network
used for Salt traffic;
- ci/deploy.sh: Read scenario data before template parsing for
cluster domain variable, needed in virsh network def;
- leave docs diagram refresh to later after all possible deploy types
have settled;
- limit keyserver proxy usage to nodes where the configured http proxy
matches the first nameserver (true for all MaaS-provisioned nodes),
so we can re-use the same pillar for FN VMs and baremetal nodes;
- add PXE/admin IP on cfg01's 3rd interface and switch other vnodes
`salt_master_host` to point to it;
JIRA: FUEL-322
Change-Id: Ie4f7aedddf2ef81046f1127b377d88dce79f0fda
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- apply `linux` state on cfg01 first, so PXE/admin IP is added and
FN VM minions are available;
- add barrier and wait for all FN VMs to register with cfg01;
- use batch-mode execution while applying `linux.network` on FN VMs;
- retry all states executed via <salt.sh> on FN VMs;
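The batch-mode application amounts to something like this (target and
batch size are illustrative):
  salt --batch-size 25% -C 'not cfg01*' state.apply linux.network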
JIRA: FUEL-310
Change-Id: I72e1c565370072500df1d486fe76e6315f583c75
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- move bash template handling (previously expanded via `envsubst`)
to lib.sh;
- move j2 template handling to lib.sh;
- move virsh network templates to 'mcp/scripts/virsh_net' subdir;
- switch virsh network templates from `envsubst` expansion to j2 and
leverage generate_config.py, similar to PDF Fuel installer adapter;
- add relevant runtime env vars (e.g. SALT_MASTER, MAAS_IP) on the fly
to PDF, to consume them in templates like params coming from PDF;
- parameterize virsh network definitions based on PDF (mgmt, public);
JIRA: FUEL-322
Change-Id: Ib94e78fc4f25797b9354a0552e884104da5d0003
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Extend `notify` to 4 variants:
- notify_i = inline (no newline) colored output;
- notify = `notify_i` + trailing '\n';
- notify_n = `notify` + extra '\n' before and after;
- notify_e = `notify` + stderr output + exit;
This allows us to remove '\n' and clean up the code a bit.
While at it, fix some 'NOTE' messages going to stderr instead of
stdout.
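A minimal sketch of how the four variants could relate (the actual
implementations differ in details such as color handling):
  notify_i() { printf "\033[1;3%dm%s\033[0m" "${2:-4}" "$1"; }    # inline, colored
  notify()   { notify_i "$1" "${2:-4}"; printf '\n'; }            # + trailing newline
  notify_n() { printf '\n'; notify "$1" "${2:-4}"; printf '\n'; } # + blank lines around
  notify_e() { notify "$1" 1 >&2; exit 1; }                       # + stderr output + exit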
Change-Id: I682e3344ae9e307c4a68ab31c7766bc91b12ee58
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- hard requirement of PDF/IDF configuration for all deployments;
- expand j2 templates for virtual deploys too;
Since until now we used the same model for *all* virtual PODs, one
of the PDF/IDF sets for existing vPODs (e.g. ericsson-virtual3) can
be re-used practically on any host, without defining new vPODs.
JIRA: FUEL-322
Change-Id: Iac6aab91b6958d0e5e175ed142da6aafadc6fac6
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|