- reclass: iec: CentOS compatibility changes:
* drop `proto: static` in favor of letting the linux formula set
the appropriate default based on target OS;
* replace `proto: manual` with `proto: none` on RHEL systems
(see the pillar sketch after this list);
* system.file: Avoid using non-existing `shadow` group for system
files;
* load br_netfilter kernel module to avoid `linux.network` state
failures;
* disable `at`, `cron` due to incomplete defaults in
salt-formula-linux (since we don't use them on iec nodes anyway);
- jumpserver/VCP VMs: centos: enable predictable interface names:
* CentOS cloud image defaults to old 'eth' naming scheme;
* add necessary kernel boot options via linux state;
* cleanup auto-generated udev rules for old eth interface names;
- salt-formula-linux: network: RHEL: Set bridge for member interfaces
* Find the bridge containing the interface being currently
configured (if any) and pass it to the `network.managed` Salt call;
- deploy.sh: Add new deploy argument `-o` for specifying the operating
system to preinstall on jumpserver and/or VCP VMs;
* defaults to 'ubuntu1604';
* only iec scenarios will also support 'centos' for now;
- user-data: minor tweaks for CentOS compatibility:
* use `systemctl` instead of `service` utility;
* explicitly enable `salt-minion` service, since it defaults to
disabled on RHEL systems;
* explicitly call `ldconfig` to work around a stale cache on RHEL,
which prevented `salt-minion` from using the OpenSSL library;
- states: virtual_init: Skip non-existing sysctl options on CentOS:
* CentOS currently uses a 3.x kernel which lacks certain sysctl
options that were only introduced in 4.x kernels, so skip them;
- state: akraino_iec: Add centos support:
* move iec repo to `/var/lib/akraino/iec` on both Salt Master and
cluster nodes;
- scenario defaults: Add CentOS configuration:
* OS-dependent configuration split;
* CentOS base image, default packages etc.;
- AArch64 deploy requirements: Add `xz` dependency
* CentOS AArch64 cloud image is archived using xz, install xz tools
for decompression;
- xdf_data: Make yaml parsing OS agnostic:
* rename `apt` to `repo` where appropriate;
* OS-dependent configuration parsing;
- lib_jump_deploy: CentOS handling changes:
* skip filesystem resize of cloud image for CentOS;
* add repo handling, package installation/removal handling for CentOS;
* unxz base image if necessary (CentOS AArch64 cloud image);
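
For illustration, the `proto` and kernel module tweaks above boil down
to pillar data along these lines (interface name and exact key layout
are indicative of salt-formula-linux usage, not copied from the actual
cluster model):

  linux:
    system:
      kernel:
        modules:
          - br_netfilter        # preload so linux.network does not fail
    network:
      interface:
        ens3:                   # illustrative interface name
          enable: true
          type: eth
          proto: none           # RHEL equivalent of Debian's 'manual'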
Change-Id: Ic3538bacd53198701ff4ef77db62218eabc662e7
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
NOTE: only os-nosdn-nofeature-noha is parameterized for now.
- move config drive & disk creation from prepare_vms to create_vms;
- make default disk size(s) configurable based on scenario defaults
and vPDF;
* compute nodes require 2 disks to be defined in vPDF, since the
pillar reclass model assumes /dev/vdb is reserved for cinder;
* if multiple disks are defined in vPDF, they are created and
attached accordingly (only ctl01 and cmp nodes are parameterized
in this change; only for the os-nosdn-nofeature-noha scenario;
see the vPDF sketch after this list);
- vCPU specifications are deduced based on vPDF (sockets, cores);
* threads/core is hard set to 2 since vPDF does not have a key
for it;
* NUMA resources are distributed evenly based on the number of
sockets configured in PDF;
* no less than the minimum requirement for a scenario is allocated
(e.g. if PDF specifies 2 cores, but the scenario requires at
least 4 cores, the larger value will be used);
- RAM is deduced based on PDF (but no less than the minimum req is
allocated, e.g. if PDF specifies 2GB RAM for computes, but the
scenario requires at least 8GB, the larger value will be used);
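
As an illustration, a vPDF compute node entry along these lines would
drive the sizing described above (names and values are made up,
following the Pharos PDF schema):

  - name: node5
    node:
      type: virtual
      arch: x86_64
      cpus: 2                   # sockets
      cores: 4                  # cores per socket (threads/core is fixed at 2)
      memory: 16G
    disks:
      - name: disk1             # system disk
        disk_capacity: 100G
      - name: disk2             # reserved for cinder as /dev/vdb
        disk_capacity: 100G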
Change-Id: I97188aa2a1006865b8429eb6483e10c76795f7d2
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- bump Pharos git submodule to allow PODs with fewer nodes;
- add `k8-calico-iec-noha` scenario definition for Akraino
IEC basic configuration;
- add `k8-calico-iec-vcp-noha` scenario definition for Akraino
IEC nested (virtualized control plane) configuration;
- add `akraino_iec` state, which will leverage the Akraino IEC
bootstrap scripts from [1];
- replace system.reboot salt call with cmd.run 'reboot' as it's more
reliable (sketched below);
- use kernel 4.15 for AArch64 K8 IEC scenarios;
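
A minimal sketch of the reboot replacement mentioned above, assuming it
is issued from a Salt state (the state ID is hypothetical and the exact
call site may differ):

  reboot_iec_nodes:
    cmd.run:
      - name: reboot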
NOTE: These scenarios will not be released in OPNFV since they don't
rely on Salt formulas but instead use the Akraino IEC scripts to
install K8s.
[1] https://gerrit.akraino.org/r/#/q/project:iec
Change-Id: I4e538e0563d724cd3fd5c4d462ddc22d0c739402
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- replace mas01 VM with a Docker container;
- drop `mcpcontrol` virsh-managed network, including special handling
previously required for it across all scripts;
- drop infrastructure VMs handling from scripts, the only VMs we still
handle are cluster VMs for virtual and/or hybrid deployments;
- drop SSH server from mas01;
- stop running linux state on mas01, as all prerequisites are properly
handled during Docker build or via entrypoint.sh - for completeness,
we still keep pillar data in sync with the actual contents of mas01
configuration, so running the state manually would still work;
- make port 5240 available on the jumpserver for MaaS dashboard access
(illustrated below);
- docs: update diagrams and text to reflect the new changes;
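
Assuming the mas01 container is described in a compose-style file,
exposing the dashboard amounts to a port mapping roughly like the
following (service name is hypothetical):

  services:
    opnfv-fuel-maas:
      ports:
        - "5240:5240"           # MaaS web UI / API, reachable on the jumpserver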
Change-Id: I6d9424995e9a90c530fd7577edf401d552bab929
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Extend one of the existing deployment arguments to allow the
installation of only the operating system and infrastructure networks,
skipping cloud setup.
Change-Id: Ibc5d0f324ed15b66f809839cfce49a0324b6fe4d
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Install a newer version of virt-manager in a local directory to work
around obsolete Ubuntu versions lacking --cpu cellN.* support.
This change only affects the CPU configuration of virtual compute
nodes in nosdn-nofeature-noha scenarios:
- set default cpu_topology to dual socket (2 cores, 2 sockets,
2 threads);
- bump default RAM to 16GB;
- define 2 NUMA cells, each with half the resources;
To keep the old behavior available (single socket), a new deploy
argument has been added (`-m`). The RAM change is not configurable
via deploy args.
NOTE: The CPU topology for virtual nodes should later be read from
PDF instead of hardcoding it on a per-scenario basis in the installer.
NOTE: Default 'ram' unit is MiB, while cellN.memory default unit is
'KiB'.
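For example, with the new 16GB default: 'ram' is 16384 (MiB), split
across 2 NUMA cells, so each cell gets 8192 MiB, i.e. cellN.memory is
passed as 8388608 (KiB).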
JIRA: FUEL-385
Change-Id: I7ca268b0a2052524cb7187a5cf9b6fa8a382c9f9
Signed-off-by: Dimitrios Markou <mardim@intracom-telecom.com>
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Make the bulk of scenario files static again by shifting out all
common virtual nodes (mas01) and states (virtual_init, maas etc.)
to default.yaml(.j2).
This allows us to parse scenario-specific data during first j2
expansion, preparing for the new Pharos installer adapter that
relies on `conf.virtual.nodes.control` length to construct the
proper list of MaaS node definitions (kvm{01,02,03} vs {ctl01,gtw01}).
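
The adapter logic is roughly the following (hypothetical template
snippet, not the exact code):

  {%- if conf.virtual.nodes.control | length >= 3 %}
  {#- HA scenario: MaaS-managed nodes are the KVM hosts #}
  {%- set maas_nodes = ['kvm01', 'kvm02', 'kvm03'] %}
  {%- else %}
  {%- set maas_nodes = ['ctl01', 'gtw01'] %}
  {%- endif %}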
Change-Id: I666ab5bd6bb2a42f98646af51950f6b9fffa0e8b
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
* Refactor OPNFV salt-formulas mechanism to resemble upstream git
structure:
- git submodules: add new submodule for each formula we patch;
- create salt-formula-x directories for OPNFV formulas;
- move mcp/metadata/service contents to their respective formula
subdirs (layout sketched below);
- use `make patches-import` for patches previously handled by
patch.sh;
- retire patch.sh
* states: add virtual_init:
- mostly based on old salt.sh, which is now obsolete;
- exclude salt-master service restart (it would kill the container);
* scenarios: cleanup (rm cfg01 virtual node def), adopt virtual_init;
* reclass: align our model with prebuilt container's Salt config:
- drop linux:network pillar data (handled by Docker);
- stop applying linux.system state on cfg01;
- align salt user homedir;
- drop salt-formula packages (preprovisioned);
* minor plumbing in deploy.sh and lib.sh;
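
The resulting layout looks roughly like this (directory names are
indicative only):

  mcp/salt-formulas/
    salt-formula-maas/          # upstream formula tracked as a git submodule
      patches/                  # OPNFV patches, imported via `make patches-import`
    salt-formula-<x>/           # OPNFV-only formula (former mcp/metadata/service content)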
JIRA: FUEL-383
Change-Id: I28708a9b399d3f19012212c71966ebda9d6fc0ac
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Instead of hardcoding the 'kvm' hostnames, use new targeting
mechanism based on scenario-specific node names, preparing for
baremetal noha scenario integration.
JIRA: FUEL-382
Change-Id: If336aa1ac130749e4df7bffaf27a55513dd4f267
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Split scenario yaml definitions for virtual.nodes based on node
role ('infra', 'control' or 'compute'), to be leveraged later to
construct node lists based on said role.
This moves the responsibility of filtering node names in scenario
files (based on 'virtual' or 'baremetal' type) to xdf_data.sh.j2,
simplifying scenario templates.
By keeping all nodes (both virtual and baremetal) in scenario files,
we can later determine the role (and implicitly the hostname) for a
MaaS-managed node based on its index in the virtual.nodes.control
structure.
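
The resulting scenario structure is roughly (node names depend on the
scenario):

  virtual:
    nodes:
      infra:
        - mas01
      control:
        - kvm01
        - kvm02
        - kvm03
      compute:
        - cmp001
        - cmp002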
JIRA: FUEL-382
Change-Id: I1f83a307631f4166ee1c57ef598c44876b962f97
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Instead of duplicating scenarios for NOVCP, allow it to be specified
using a new deploy argument, `-N`.
Things are getting convoluted, so instead of creating dedicated
'*_pdf.yml.j2' files for each group of similar features, apply the
templating in-place and rename all affected files to ".yml.j2".
This breaks the .gitignore assumption of hiding only "*_pdf.yml" files,
so extend (manually) the <mcp/reclass/classes/cluster/.gitignore>
with `git ls-files --exclude-standard -o` after an expansion.
- ha: move nfv.cpu_pinning to j2, conditioned by 'baremetal';
- ha: add cmp00* vnode definitions (hugepages need more RAM);
- labs/local: enable hugepage params for non-dpdk noha (see sketch
below);
- salt.sh: add route_wrapper to all non-infra VMs;
This change extends novcp support to all HA scenarios.
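
The hugepage parameters referred to above map onto salt-formula-linux
pillar data roughly like the following (the count is illustrative):

  linux:
    system:
      kernel:
        hugepages:
          small:
            default: true
            count: 2048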
Change-Id: I7a80415ac33367ab227ececb4ffb1bc026546d36
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
j2/python makes it easier to read and manipulate strings, although it
does need some special care around undefined dict keys.
With this in place, deploy.sh only contains the higher level logic for
the deployment process.
- merge arch-specific default configuration files into a single
file with the arch name as the main dict key of the old config (also
avoids creating duplicate 'virtual' YAML keys in $LOCAL_PDF; see the
sketch after this list);
- move template handling to separate <lib_template.sh>;
- decouple tight bash ordering of scenario expansion -> parse_yaml ->
variable export (e.g. CLUSTER_DOMAIN) -> re-use in cluster j2s;
however we can't parse *all* j2s in one go, as scenario j2s might
expand to YAMLs needed while expanding cluster j2;
- split `do_templates` into separate functions for each stage, with
no coupling between them other than call order;
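
For example, the merged defaults file keys everything by architecture,
roughly as follows (keys and values are illustrative):

  x86_64:
    base_image: <x86_64 cloud image URL>
    virtual:
      default:
        vcpus: 2
        ram: 4096
  aarch64:
    base_image: <AArch64 cloud image URL>
    virtual:
      default:
        vcpus: 6
        ram: 4096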
Change-Id: I4b5e804094c00e5e918caf769fd85fa52181ad76
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: Ie4d8e70866d533d195a6e80cdfecbdb00a3027ce
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>