Age | Commit message | Author | Files | Lines |
|
Introduce a simple mechanism that simulates an 'if-arch-then' conditional
for reclass models:
- add a new <all-mcp-ocata-common> class hierarchy;
- at runtime (via <salt.sh>) make 'all-mcp-ocata-common.arch' point
to 'all-mcp-ocata-common.$(uname -i)' dynamically (see the sketch below);
- inherit the new 'arch' class in all cluster models;
- factor out the current x86_64 default for "salt_control_xenial_image";
- add an AArch64 default for the "salt_control_xenial_image" param;
Change-Id: I3b239b28d0fd1cc2ced8579e2e93b764eb71ffc3
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
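A minimal sketch of how the runtime selection in <salt.sh> could look; the
reclass tree location and the generated file name are assumptions, not the
exact repo paths:
  # hypothetical: select per-arch defaults for the machine we are running on
  RECLASS_BASE="/srv/salt/reclass/classes/cluster/all-mcp-ocata-common"
  printf 'classes:\n  - cluster.all-mcp-ocata-common.%s\n' "$(uname -i)" \
      > "${RECLASS_BASE}/arch.yml"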
|
|
In order to avoid SubjectAltName warnings,
bring it into the proxy SSL certificate.
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
Change-Id: I46fe9697469354bc028039cc1f030baae1ccd7fb
|
|
Change-Id: I34706afbdbcbdaace0b0ae6c2c2e8cb932812d4e
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Change-Id: I60a5566de43ca58f3f172611c95546b1353f8406
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Change-Id: I8937f4f676a48c852bece0680da0b559df7c2e7c
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
A virtual node based on the Ubuntu cloud image
won't register as a minion on the Salt master.
Change-Id: Ia32eae01a5633042189cdebebcba8043cae61503
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
JIRA: FUEL-274
Change-Id: I2c8161b24cb18a0d1f9dc6fd509ce18af7ea8cf5
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
* add an "ubuntu" user so that functest gets the cluster credentials
* reduce CPU resources for VCP nodes in the nofeature scenario
* tune salt targets for the maas state
* specify NTP servers
Change-Id: I433a1de1cd2c69c6747c62c3359f5485dee3bfa4
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Change-Id: I01744bd5728d6fc4c8cd3792aee9759434d18645
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Change-Id: I4baa9940ae14ef6e084fda7169ec43be7cf3f449
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
* shift VCP node interfaces, since names start from ens2
* add an extra salt sync before VCP start-up
* run the rabbitmq state on the 1st node first, then on the rest
(see the targeting sketch below)
Change-Id: Ic2c174c288a5e89f2f28c0d9aa573340190a61d3
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
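The ordered rabbitmq run could look roughly like this from the salt master;
the compound matchers are standard salt syntax, but the exact targets and the
form of the extra sync used by the deploy scripts are assumptions:
  salt '*' saltutil.sync_all                                 # extra sync (assumed form)
  salt -C 'I@rabbitmq:server and *01*' state.sls rabbitmq    # 1st node first
  salt -C 'I@rabbitmq:server and not *01*' state.sls rabbitmq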
|
|
Baremetal support introduced a couple of VCP VMs, which have 2
network interfaces:
- primary (ens3 inside x86 VM) - connected to "br-mgmt" bridge on
each kvm node, serves for MaaS DHCP / connection to salt master;
- secondary (ens4 inside x86 VM) - connected to "br-ctl" bridge on
each kvm node, serves for Openstack Management network;
However, the reclass model was configured to use a single IP address
on the primary interface, breaking the connection to the salt master
while also leaving the Openstack Management network improperly connected.
Fix this by configuring the primary interface for DHCP, while the
secondary gets a static IP in Openstack Management network.
Change-Id: I9f1d6f080e882bfaae7b5f209bc3c5536826ba06
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
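For illustration only, the intended end state inside a VCP VM expressed as
plain commands (the real configuration is rendered from the reclass model by
the linux formula); the management address below is a placeholder:
  dhclient ens3                           # primary NIC: DHCP (MaaS, via br-mgmt)
  ip addr add 172.16.10.101/24 dev ens4   # secondary NIC: static mgmt IP (placeholder)
  ip link set ens4 up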
|
|
In order to connect to the right underlay
bridge, swap the interfaces.
Change-Id: I0ae1f50e8d1f3485404bd7e6eea772cab555b313
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
|
|
Previously, we hardcoded the fabric name for our 3rd interface
(which serves PXE/DHCP for the target nodes) to "fabric-2",
relying on predictable index numbers to be provided by MaaS based
on the interfaces defined in /etc/network/interfaces.
However, the fabric IDs/names generated by MaaS are not predictable,
and therefore cannot be hardcoded in our reclass model / scripts.
Work around this by:
- adding support for fabric ID deduction based on CIDR matching
during subnet create/update operations in the MaaS py module
(see the CIDR-match sketch below);
- adding support for VLAN DHCP enablement in the MaaS py module,
which was previously handled via shell MaaS API operations
from maas/region.sls;
While at it, revert the previous commit that disabled network discovery
("MaaS: Disable network discovery"), since it turns out the subnet
creation failure was caused by the wrong fabric numbering, not by
network discovery.
This reverts commit 8cdf22d1a1bae4694a373873cab4feb6251069b7.
Change-Id: I15fa059004356cb4aaabb38999ea378dd3c0e0bb
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
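Conceptually, the deduction amounts to matching the target CIDR against the
subnets MaaS already knows about. A sketch of that lookup using the maas CLI
and jq instead of the actual py module code; the profile name, CIDR and JSON
field names are assumptions:
  TARGET_CIDR="192.168.11.0/24"   # placeholder
  FABRIC_ID=$(maas opnfv subnets read | \
      jq -r ".[] | select(.cidr == \"${TARGET_CIDR}\") | .vlan.fabric_id")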
|
|
In case nodes are already powered on and have an IP in the same
range as the new MaaS DHCP one (e.g. from a previous deploy),
MaaS API will reject the subnet creation due to overlapping
addresses. Try to work around this by disabling network discovery.
Change-Id: I70a33c552bf38a7ccbc1bb7e90c21f424f082bc5
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Without the 'type' parameter set to 'dynamic', MaaS was configured
to reserve the IP range instead of allocating it dynamically.
This led to IP exhaustion warnings in the MaaS dashboard, as well as
incorrect IP allocation.
Change-Id: I1f2b90bf4cd2393cfab6d4bc17771cef009701c0
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
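Expressed as the equivalent maas CLI call for illustration (the actual fix
lives in the reclass/pillar data feeding the MaaS module); the profile name
and addresses are placeholders:
  maas opnfv ipranges create type=dynamic \
      start_ip=192.168.11.51 end_ip=192.168.11.100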
|
|
Explicitly configure dhcp_interface for mas01, in order to allow
the interface name to be parametrized via "dhcp_interface" _param.
Change-Id: I6a2750adc1941c0aa1f94ac9b39133b5bd2388c6
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
* re-assign the IP from the interface to the bridge (see the sketch below)
- install bridge-utils
- reboot straight away after network config
* change the image source for VCP
Change-Id: I34506ee161337b5d3a4088cfdf3c082d99ccb695
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
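A rough shell equivalent of the bridge re-assignment on a kvm node (the real
change is driven through the reclass model); interface, bridge name and
address are placeholders:
  apt-get install -y bridge-utils
  brctl addbr br-mgmt
  brctl addif br-mgmt ens3
  ip addr flush dev ens3                     # drop the IP from the NIC...
  ip addr add 192.168.11.2/24 dev br-mgmt    # ...and re-assign it to the bridge
  ip link set br-mgmt up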
|
|
- ci/deploy.sh: fail if default scenario file is missing;
- start by copying reclass/classes/cluster/virtual-mcp-ocata-ovs as
classes/cluster/baremetal-mcp-ocata-ovs;
- add new state (maas) that will handle MaaS configuration;
- Split PXE network in two for baremetal:
* rename old "pxe" virtual network to "mcpcontrol", make it
non-configurable and identical for baremetal/virtual deploys;
* new "pxebr" bridge is dedicated for MaaS fabric network, which
comes with its own DHCP, TFTP etc.;
- Drop hardcoded PXE gateway & static IP for MaaS node, since
"mcpcontrol" remains a NAT-ed virtual network, with its own DHCP;
- Keep internet access available on first interfaces for cfg01/mas01;
- Align MaaS IP addrs (all x.y.z.3), add a public IP for easy debugging
via the MaaS dashboard;
- Add static IP in new network segment (192.168.11.3/24) on MaaS
node's PXE interface;
- Set MaaS PXE interface MTU to 1500 (weird network errors with jumbo frames);
- MaaS node: Add NAT iptables traffic forwarding from the "mcpcontrol"
to the "pxebr" interface (see the sketch below);
- MaaS: Add hardcoded lf-pod2 machine info (fixed indentation in v6);
- Switch our targeted scenario to HA;
* scenario: s/os-nosdn-nofeature-noha/os-nosdn-nofeature-ha/
- maas region: Use mcp.rsa.pub from ~ubuntu/.ssh/authorized_keys;
- add route for 192.168.11.0/24 via mas01 on cfg01;
- fix a race condition in the kvm nodes' network setup:
* add "noifupdown" support in salt formula for linux.network;
* keep primary eth/br-mgmt unconfigured till reboot;
TODO:
- Read all this info from PDF (Pod Descriptor File) later;
- investigate leftover references to eno2, eth3;
- add public network interfaces config, IPs;
- improve wait conditions for MaaS commission/deploy;
- report upstream breakage in system.single;
Change-Id: Ie8dd584b140991d2bd992acdfe47f5644bf51409
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
Signed-off-by: Guillermo Herrero <Guillermo.Herrero@enea.com>
Signed-off-by: Charalampos Kominos <Charalampos.Kominos@enea.com>
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
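A rough shell illustration of the routing/NAT plumbing described above;
interface names and the mas01 address are placeholders rather than the values
actually used by the states:
  # on cfg01: reach the new MaaS PXE segment via mas01 (placeholder address)
  ip route add 192.168.11.0/24 via 10.20.0.3
  # on mas01: forward/NAT between the "mcpcontrol" and "pxebr" interfaces
  # (ens3/ens6 are assumed names)
  iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
  iptables -A FORWARD -i ens6 -o ens3 -j ACCEPT
  iptables -A FORWARD -i ens3 -o ens6 -m state --state RELATED,ESTABLISHED -j ACCEPT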
|