Age | Commit message | Author | Files | Lines |
|
Revert "salt: Use apt-mk 'stable' distribution"
Revert "reclass: apt_mk_version: stable"
This reverts commit d1b6119e288a31e015573363ce77790fec8684df.
This reverts commit 4563ea7d62238e8273d840a8d9c6c1e179ca584e.
Change-Id: I383db1f78a087045086096cbc674260b985fd913
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
A few things differ between baremetal and virtual nodes:
- provisioning method;
- network setup;
Since we now support fully dynamic network config based on PDF + IDF,
as well as dynamic provisioning of VMs on the jumpserver (as virtual
cluster nodes) and MaaS-driven baremetal provisioning respectively,
let's drop the 'baremetal-' prefix from cluster model names and
prepare for unified scenarios.
Note that some limitations still apply, e.g. virtual nodes are spawned
only on the jumpserver (localhost) for now.
JIRA: FUEL-310
Change-Id: If20077ac37c6f15961468abc58db7e16f2c29260
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Decouple virtual cluster nodes (ctl, gtw etc.) from opnfv_fn_* vars
in favor of parsing PDF/IDF.
This is the first step towards unifying baremetal and virtual network
definition templates, as well as allowing virtual nodes to run on a
remote hypervisor (and eventually with a different arch).
opnfv_fn_* vars will still be used for infra VMs spawned on FN (cfg01
and optionally mas01).
Adopt new 'net_map.j2' from Pharos submodule for new templates (virt),
as well as old ones (baremetal).
JIRA: FUEL-322
Change-Id: I150c2416566bbe42ea11cd00f12a8a7bf96776c2
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- 10.1.0.0/24 (internal):
* 10.1.0.101 -> opnfv_openstack_compute_node01_tenant_address
* 10.1.0.124 -> opnfv_openstack_gateway_node01_tenant_address
- 172.16.10.0/24 (mgmt):
* 172.16.10.11 -> opnfv_openstack_control_node01_address
* 172.16.10.100 -> opnfv_infra_config_address
* 172.16.10.101 -> opnfv_openstack_compute_node01_control_address
* 172.16.10.111 -> opnfv_opendaylight_server_node01_single_address
* 172.16.10.124 -> opnfv_openstack_gateway_node01_address
- 10.16.0.0/24 (public):
* 10.16.0.11 -> opnfv_openstack_control_node01_external_address
* 10.16.0.101 -> opnfv_openstack_compute_node01_external_address
* 10.16.0.124 -> opnfv_openstack_gateway_node01_external_address
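For reference, a minimal sketch of how a few of the addresses above
typically land in the generated pod_config.yml (the parameters/_param
nesting is an assumption; values are taken from the mapping above):
  parameters:
    _param:
      opnfv_openstack_control_node01_address: 172.16.10.11
      opnfv_openstack_compute_node01_tenant_address: 10.1.0.101
      opnfv_openstack_gateway_node01_external_address: 10.16.0.124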
To re-use the baremetal DPDK config template, move:
- cluster.baremetal-mcp-pike-ovs-dpdk-ha.infra.config_pdf
+ cluster.all-mcp-arch-common.infra.config_dpdk_pdf
Drop unused 'ceilometer_graphite_publisher_host' (172.16.10.107).
JIRA: FUEL-322
Change-Id: I3aef3415bd696a7ae5b566af12af4733a50c2135
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
To be able to re-use the pod_config.yaml parameters generated from the
PDF for both baremetal and virtual scenarios without forking it, we
first need to align the IP addresses used in virtual deployments.
The currently hard-set values will be parameterized in a later change.
- 10.1.0.0/24 (internal):
* 105 -> 101 (cmp01); 106 -> 102 (cmp02);
* 110 -> 124 (gtw01);
- 172.16.10.0/24 (mgmt):
* 101 -> 11 (ctl01);
* 105 -> 101 (cmp01); 106 -> 102 (cmp02);
* 110 -> 124 (gtw01);
- 10.16.0.0/24 (public):
* 101 -> 11 (ctl01);
* 105 -> 101 (cmp01); 106 -> 102 (cmp02);
* 110 -> 124 (gtw01);
JIRA: FUEL-322
Change-Id: I5d5def4e92c3462f1a34f73dde65ef7a262a5d62
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- add new virsh managed network 'pxebr' (to mimic baremetal behavior
on virtual PODs, this will be the equivalent of PXE/admin network);
- connect 'pxebr' to 3rd interface for cfg01, mas01 for all deploys
(used to be baremetal-specific), replacing 'internal';
- keep 'mcpcontrol' connected only to 'cfg01' (+ 'mas01' if present)
for initial infrastructure bring-up (1st interface);
- switch all virtual cluster nodes to 'pxebr' (1st interface);
- use 'pxebr' for all Salt cluster nodes traffic, 'mcpcontrol' only
for mas01<=>cfg01 Salt traffic;
- convert <user-data.template> to jinja2 and expand it based on PDF
instead of using `envsubst`;
- split <user-data.sh.j2> into two versions, one for each network
used for Salt traffic;
- ci/deploy.sh: Read scenario data before template parsing for
cluster domain variable, needed in virsh network def;
- leave docs diagram refresh to later after all possible deploy types
have settled;
- limit keyserver proxy usage to nodes where the configured http proxy
matches the first nameserver (true for all MaaS-provisioned nodes),
so we can re-use the same pillar for FN VMs and baremetal nodes;
- add PXE/admin IP on cfg01's 3rd interface and switch other vnodes
`salt_master_host` to point to it;
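A rough sketch of the intended reclass override (the pillar wiring is
an assumption based on this log, not the exact change):
  parameters:
    _param:
      # cfg01 PXE/admin IP, used by all cluster minions as Salt master
      salt_master_host: ${_param:opnfv_infra_config_pxe_address}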
JIRA: FUEL-322
Change-Id: Ie4f7aedddf2ef81046f1127b377d88dce79f0fda
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- apply `linux` state on cfg01 first, so PXE/admin IP is added and
FN VM minions are available;
- add barrier and wait for all FN VMs to register with cfg01;
- use batch-mode execution while applying `linux.network` on FN VMs;
- retry all states executed via <salt.sh> on FN VMs;
JIRA: FUEL-310
Change-Id: I72e1c565370072500df1d486fe76e6315f583c75
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- move bash template handling (previously expanded via `envsubst`)
to lib.sh;
- move j2 template handling to lib.sh;
- move virsh network templates to 'mcp/scripts/virsh_net' subdir;
- switch virsh network templates from `envsubst` expansion to j2 and
leverage generate_config.py, similar to PDF Fuel installer adapter;
- add relevant runtime env vars (e.g. SALT_MASTER, MAAS_IP) on the fly
to the PDF, so templates can consume them like params coming from the
PDF;
- parameterize virsh network definitions based on PDF (mgmt, public);
JIRA: FUEL-322
Change-Id: Ib94e78fc4f25797b9354a0552e884104da5d0003
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
It is easier to just generate the `pod_config.yaml` file than to
maintain it, so let's remove it.
While at it, link sample PDF/IDF inside pharos git submodule, so we
don't have to pass a different lab-config URI to use the sample.
To generate pod_config.yml for the sample PDF/IDF:
$ ./ci/deploy.sh -l local -p pod1 -s os-odl-nofeature-ha -d
$ cat mcp/deploy/images/pod_config.yml
JIRA: FUEL-322
Change-Id: If5898f92ef54bebc31d57f9632959e9093a89250
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
This was only affecting pod deployments with different board models,
under the current limited support:
- the 3 KVM nodes are assumed to be the same model and have the same
NIC names;
- the 2 compute nodes are assumed to be the same model and have the
same NIC names.
For the compute nodes, the br-mesh NIC name was wrong due to an
incorrect IDF mapping.
Change-Id: I9685b35cb23b03be9fc0e6fe16c0712a9ad70e19
Signed-off-by: Guillermo Herrero <guillermo.herrero@enea.com>
|
|
RHEL-family virtualization tools reserve PCI slot 02:00 for VGA, even
if 'nographics' is specified when creating the VM (in case the user
later wants to hook up a video card, which usually *requires* PCI
slot 2). Debian systems do not follow this rule (tested with libvirt
1.x, 2.x, 3.x), hence the 1st NIC lands on PCI slot 2 (and gets the
eth name 'ens2').
To align the behavior across all possible jumpserver distros, bring
back the virtio video.
This reverts commit 738f6c3b68d1179de1ff790f9e72c25f10874da4.
Change-Id: Ifd855c12e04aec1ff0ab047b13f8081365741889
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Bump Pharos git submodule to pick up support for MaaS timeout
parameterization, as well as new IDF for lf-pod2.
Drop arch-specific MaaS timeouts, as they are now configurable
on a per-POD basis.
Sample usage (via IDF):
idf:
  fuel:
    maas:
      # MaaS timeouts (in minutes)
      timeout_comissioning: 10
      timeout_deploying: 15
Change-Id: I8fafa336b0bc64d705f6c2e40fc3dfb85672fb15
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Based on Canonical research (https://goo.gl/QJykMa), there is a low
risk of attack in private cloud environments, therefore turn off the
related kernel patches & regain the lost performance.
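A minimal sketch of how such mitigations can be toggled via kernel
boot options in the linux formula pillar (assuming the patches in
question are the pti/Spectre ones; the pillar path and flags are
assumptions, not the actual change):
  linux:
    system:
      kernel:
        boot_options:
          - nopti
          - nospectre_v2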
Change-Id: I661fa127241e327b07d21a29d58d584997607123
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Since VCP VMs (created via the salt formula) do not have a video
controller defined in their domain XMLs, network devices end up on
different PCI slots and hence get different names assigned
(ens2+, vs foundation node VMs, which start with ens3).
To align network interface names for VMs on the jumpserver vs the kvm
nodes, and reduce confusion, remove the video controller from FN VMs.
This allows some cleanup:
- drop extra AArch64 args from virt-install;
- unify 'opnfv_vcp_vm_*' and 'opnfv_fn_vm_*' variables;
Change-Id: I0d108b00914b3eaaa03b67c652174f8ed4573118
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
* switch ovs/dpdk scenario from vlan to vxlan mode
* force br-ex interface to mitigate race with incorrect state
* remove dpdk packages list (already in upstream)
Change-Id: Ib827cef2d67879fd2a86d286ca2118b22493274d
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Fixes: 7c79115
Change-Id: I62f52382b297b1aa9cfc37f74f04a00872ead1ef
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- switch from securedlab to pharos as the lab-config structure;
- accommodate the move of net_config from PDF to IDF in the j2
templates (see the sketch below);
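A minimal sketch of the net_config layout now expected in the IDF
(field names follow the pharos spec as understood here and are
assumptions; values are illustrative):
  idf:
    net_config:
      mgmt:
        interface: 1
        vlan: native
        network: 172.16.10.0
        mask: 24
      public:
        interface: 2
        vlan: native
        network: 10.16.0.0
        mask: 24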
Change-Id: Ib04e4fb384568a6efd9e78a080857b663521ae88
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Previous cherry-pick failed to rename 'ocata' to 'pike'.
JIRA: FUEL-317
Change-Id: Ic1a1145e0652f2a7d15980399232631cf3fc5080
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
If upstream proxy is defined in IDF, propagate it to pillar data:
- linux:system:proxy:keyserver:http(s) for cfg01, mas01;
- maas:region:upstream_proxy for mas01;
Sample IDF config:
idf:
  fuel:
    network:
      upstream_proxy:
        address: 10.0.2.2
        port: 3128
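The resulting pillar data would look roughly like this (a sketch using
the pillar paths listed above and the sample values; the exact nesting
under maas:region is an assumption):
  linux:
    system:
      proxy:
        keyserver:
          http: http://10.0.2.2:3128
          https: http://10.0.2.2:3128
  maas:
    region:
      upstream_proxy:
        address: 10.0.2.2
        port: 3128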
JIRA: FUEL-317
Change-Id: I12be815e1b4564227fb09c20ce06cd71e7d433b6
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- Remove hardcoded /24 mask
- Use PDF as source for public network, with reclass params:
opnfv_net_public, _mask, _gw, _pool_start, _pool_end
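Illustrative values for the new reclass params (a sketch only; the
full parameter names are assumed from the abbreviations above):
  parameters:
    _param:
      opnfv_net_public: 10.16.0.0
      opnfv_net_public_mask: 24
      opnfv_net_public_gw: 10.16.0.1
      opnfv_net_public_pool_start: 10.16.0.100
      opnfv_net_public_pool_end: 10.16.0.200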
JIRA: FUEL-315
Change-Id: Idf3a4ed8f63f58fa90d9c1dcb7751ef3b1c9bd36
Signed-off-by: Guillermo Herrero <guillermo.herrero@enea.com>
|
|
Instead of defining an HTTP proxy for all salt-minion traffic, which
also includes some OpenStack API accesses we can't filter (no_proxy
is not yet supported), add & leverage support for proxy configuration
during APT keyserver access / key download.
JIRA: FUEL-331
Change-Id: I9470807633596c610cfafb141b139ddda2ff096b
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Although we properly filter the PXE/admin interface in the common
openstack_compute_pdf.yml.j2 template and use DHCP instead of manual
setup, we failed to do the same in scenario-specific overrides
(ODL, OVS), so we end up with 'proto: manual' on PXE/admin on cmp
nodes.
The fix is trivial and reuses the mechanism in the common class in
scenario-specific templates (if interface is PXE/admin, use 'DHCP'
instead of 'manual').
This solves the issue of broken connectivity to Salt master after
cmp reboot.
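A minimal sketch of the mechanism, as it could appear in a scenario
override template (the jinja variable names are hypothetical, not the
actual template code):
  linux:
    network:
      interface:
        {{ nic }}:
          enabled: true
          type: eth
          proto: {{ 'dhcp' if nic == nic_admin else 'manual' }}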
Change-Id: I1953d03343190acb2efcab4412a3d37e130b0ea9
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Although previous commit d1b6119 changed the first reference of
apt-mk repos to 'stable' from 'nightly', it missed the cluster model.
This fixes redeploys with `-f`, which fail due to conflicts between
already installed 'stable' packages and 'nightly' ones.
Fixes: d1b6119
Change-Id: I854bac86feaaa61da0b68d158e270eec1ee0ccb7
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- openvswitch 2.8 officially supports kernel versions from 3.10 to 4.12
- ODL baremetal scenario is acting up with floating/public SNAT
flow under hwe edge kernel 4.13
Change-Id: I099d528b3b1c2ea34f8f856cd60f809f90defea6
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Allow end-users to easily change the MaaS commissioning/deploying
timeouts by simply editing the reclass model.
While at it, use arch-specific values and bump deploy timeout on
AArch64 to 20 minutes instead of 15.
Change-Id: I37ae434ecebdd64effb007baa06c722b1db15c66
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Balance VM distribution on the 3 kvm nodes, as kvm02 has 4 VCP VMs
while kvm{01,03} have 5 VCP VMs each (without ODL).
Instead of spawning the ODL VCP VM on kvm03, move it to kvm02.
Change-Id: Id03b9453ee7c15cd6785c0bc073a38b87034aede
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Since the Mirantis prebuilt image comes with salt-minion 2016.3
instead of 2016.11, and upgrading it leads to a hard-to-break
catch-22, use the Ubuntu cloud archive image we already download for
the FN VMs and pre-install:
- a newer kernel (hwe-edge);
- salt-minion (2016.11);
This also implicitly aligns the image handling on AArch64 and x86_64.
Change-Id: I86d1c777449d37bdd0348936a598e3ffe9d265af
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
By default, the MaaS formula installs Salt minion 2016.3 via curtin
on the physical nodes. 2016.3 does not properly support the proxy_host
config option, causing timeouts during the `linux.system.repo` SLS
apply.
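For context, the relevant minion settings look like this in newer
salt-minion releases (a sketch; the path and values are illustrative):
  # /etc/salt/minion.d/proxy.conf
  proxy_host: 10.0.2.2
  proxy_port: 3128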
Change-Id: I3d6245f0d4b425170c43b3b62a21ad9acc6cb97e
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Prepare for decoupling management from public (drop mas01 NAT):
- ctl: change heat URLs to use new management VIP instead of public;
Change-Id: I8e220ee37bd4177c3afd58a9ee401f815d046706
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Include the `openstack_web_public_vip` class to set up the old VIP in
the public network, and use the old class for the mgmt VIP.
Also change the generic hostname 'prx' to point inside the mgmt net.
Change-Id: Iff69394f16ede290d149a26b054a85371f00f8e0
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Instead of using NAT on the mas01 node for all cluster node outgoing
traffic, use the MaaS built-in proxy for APT traffic to leverage its
caching capabilities too.
Also enable the proxy for salt minions, so they can access public
keyservers et al.
Clean up the public DNS from the kvm nodes, as it interferes with the
MaaS proxy.
Add an example config for a global env proxy, but don't enable it:
- default environment settings in /etc/environment (via reclass);
The MaaS proxy will not be used (at least for now) on nodes:
- cfg01;
- mas01;
NOTE: We can't yet drop the maas.pxe_nat state completely, as certain
Openstack services are still accessed via public addresses from ctl
nodes.
JIRA: FUEL-317
JIRA: FUEL-318
Change-Id: I6c5f6872bb94afb838580571080e808bc262fc68
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
When we dropped the default gw via mas01 NAT, we uncovered a bug:
compute nodes do not have the proper public gw set up and used to
reach the public network via mas01, slowing everything down.
Add a gw similar to the prx nodes (see the sketch below).
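One way to express this in the linux formula pillar (a sketch only;
the interface name and parameters are assumptions, not the actual
change):
  linux:
    network:
      interface:
        br-ex:
          address: ${_param:external_address}
          netmask: 255.255.255.0
          gateway: ${_param:opnfv_net_public_gw}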
Fixes: d4ab072
Change-Id: I4343c31c376a7a223670cdd623366454396d8d92
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Ubuntu prefers IPv6 connections, which in some networks breaks
software updates (it does an AAAA DNS lookup before the A record
lookup). Let's prefer old-style IPv4 connections over the new IPv6
ones in order to save some processing and resource utilization.
Based on previous work from [1] (but without /etc/gai.conf, only APT).
[1] https://review.openstack.org/#/c/462502/
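A minimal sketch of the APT-only approach as a salt state (the state
and file names are illustrative, not the actual change):
  apt_prefer_ipv4:
    file.managed:
      - name: /etc/apt/apt.conf.d/99force-ipv4
      - contents: 'Acquire::ForceIPv4 "true";'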
JIRA: FUEL-321
Change-Id: Ic3dff3baa1c0be9ac95972557d6a2d26641bfe1b
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
The Salt minion could return 'no response' and leave the VCP node(s)
unconfigured, so catch this output after the linux state as well.
Also clean up the excess route on the proxy nodes.
Change-Id: I3183fa09ff41a8f027ee789869bdae0c3962ab8f
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Change-Id: I87efd87f8ac05ed9b3189e5dba80748e07c86d5d
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
The OVN-based scenario doesn't require a conventional gateway node,
since connectivity to external networks and routing occur on the
compute nodes.
Change-Id: I81e0d497170d5ffb067adf13b0e46290525f26a6
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
The proper patches have been merged into upstream (nova/neutron
formulas, system reclass) to use a separate dir for vhost_user sockets.
Change-Id: Iba8d8a9a05c5ab681b5b5ffbea786dca92704c82
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
The updated libvirt formula now supports a group name as an option
for the unix socket parameter.
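A sketch of the kind of pillar this enables, assuming the formula maps
the libvirtd.conf unix_sock_group setting under libvirt:server (key
name assumed):
  libvirt:
    server:
      unix_sock_group: libvirt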
Change-Id: I683e38971fe6c939fd09e95b805d611ddc596f28
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Change-Id: I360dcb675c90b6f20687979ebc493afe6682c821
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Use PXE/admin network for salt traffic from/to all minions
except cfg01, mas01.
This allows us to drop the route to admin net from cfg01.
Change-Id: Ic2526f1ff77afe5d92ced900971f4c8f78d2d8a2
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
- patch MaaS to default to `DHCP` instead of `AUTO` for physical
PXE interfaces (all IPs will be handed out by MaaS DHCP *inside* the
defined dynamic DHCP IP range);
- reduce range to silence bogus MaaS warning about address exhaustion;
- regenerate pod_config.yml.example to reflect the changes;
- drop `opnfv_infra_maas_pxe_address` (duplicate of
`opnfv_infra_maas_node01_deploy_address`);
- add `opnfv_infra_config_pxe_address` for future usage;
- while at it, fix missing patch copyright;
JIRA: FUEL-316
Change-Id: I81fad333e77f7c8508cd2b2b267c7b39c130e3e1
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|