path: root/mcp/scripts/lib_jump_deploy.sh
Commit history (Age / Commit message / Author / Files, Lines):
2020-01-09  baremetal, virtual: Bump kernel to hwe-18.04 (5.0)  [Alexandru Avadanii, 1 file, -3/+7]
On some aarch64 platforms (e.g. ThunderX 1), lvcreate exhibits spurious timing issues that result in incomplete/corrupted LVM thin creation and eventually a transaction ID mismatch between userspace and kernel space. This in turn leads to cinder-volume issues, either when creating the thin storage pool (vgroot-pool) and/or when creating the LVs inside said pool. The issue manifests sporadically on Ubuntu Bionic + UCA, so until a working userspace/kernel combination is found, work around it by bumping the kernel package to hwe-18.04 (kernel 5.0), effectively bypassing the timing issues during volume creation. This affects all cluster machines (HA and NOHA scenarios, baremetal and virtual deployments, x86_64 and aarch64, baremetal and virtualized nodes).

Note: Ubuntu Bionic cloud image partition handling requires e2fsprogs 1.43, which is not currently available on Ubuntu Xenial / CentOS 7.

Change-Id: I839e03080104c391fe18185b9544c9df43c114e6
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
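A minimal sketch of the kind of kernel bump this implies for an Ubuntu Bionic image; the real deployment drives this through its scenario package lists, and the chroot mount path here is illustrative:

    # Install the Bionic HWE (5.0) kernel inside the mounted image chroot
    sudo chroot /mnt/image apt-get update
    sudo chroot /mnt/image apt-get install -y linux-generic-hwe-18.04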
2019-07-29  [deploy] Explicitly set NS for resolvconf in VMs  [Alexandru Avadanii, 1 file, -3/+4]
With newer Ubuntu distros using netplan and systemd-resolve, we can't rely on /etc/resolv.conf found on the Jumphost being usable inside the guest VMs, so explicitly use the public network DNS servers configured in PDF/IDF. This will enable support for Jumpserver operating systems like Ubuntu 18.04.

Change-Id: I0c7e02d5c1b822f809ce818e739c19d0344f39f5
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
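A hedged sketch of injecting the PDF/IDF public-network DNS servers into the guest image instead of copying the jumphost's resolv.conf; the variable name and mount path are illustrative:

    dns_public="1.1.1.1 8.8.8.8"   # placeholder for the value parsed from PDF/IDF
    for ns in ${dns_public}; do
      echo "nameserver ${ns}" | sudo tee -a /mnt/image/etc/resolv.conf
    done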
2019-07-22  [iec] centos: Preinstall git into cloud image  [Alexandru Avadanii, 1 file, -2/+3]
While at it, fix CentOS selinux preconfiguration on x86_64, which was previously limited (incorrectly) to AArch64.

Change-Id: I2d6604d3eea2bfc11fdd5dd3aeb4e2c0c3ede4a2
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
2019-07-01  [lib] Limit cloud img partition resize to Xenial  [Alexandru Avadanii, 1 file, -1/+2]
All cloud images except Ubuntu Xenial (i.e. CentOS 7, Ubuntu 18.04) already have enough free space on the predefined partitions, so skip the resize to avoid dealing with the newer e2fsprogs required by Ubuntu 18.04.

Change-Id: I184590e631c76910e7c3169dc7bee3c5902ebaf1
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
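A hedged sketch of the gating logic; the actual variable and helper names in lib_jump_deploy.sh may differ:

    case "${base_image}" in
      *xenial*)
        resize_cloud_image "${base_image}"   # hypothetical helper, Xenial only
        ;;
      *)
        echo "Skipping partition resize for ${base_image}"
        ;;
    esac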
2019-06-29  [virtual] Add Ubuntu 18.04 (Bionic) basic support  [Alexandru Avadanii, 1 file, -0/+18]
Support Ubuntu 18.04 for virtual deployments (and implicitly for VCP VMs). Note that MaaS-provisioned systems will require the same changes to be applied via curtin templates.

Change-Id: I7cbd7e7c4421f6b970ce6ef97c10d269fec5fca3
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
2019-06-28  [iec] Add basic CentOS support (virtual only)  [Alexandru Avadanii, 1 file, -17/+52]
- reclass: iec: CentOS compatibility changes:
  * drop `proto: static` in favor of letting the linux formula set the appropriate default based on the target OS;
  * replace `proto: manual` with `proto: none` on RHEL systems;
  * system.file: avoid using the non-existent `shadow` group for system files;
  * load the br_netfilter kernel module to avoid `linux.network` state failures;
  * disable `at` and `cron` due to incomplete defaults in salt-formula-linux (we don't use them on iec nodes anyway);
- jumpserver/VCP VMs: centos: enable predictable interface names:
  * the CentOS cloud image defaults to the old 'eth' naming scheme;
  * add the necessary kernel boot options via the linux state;
  * clean up auto-generated udev rules for old eth interface names;
- salt-formula-linux: network: RHEL: set bridge for member interfaces:
  * find the bridge containing the interface currently being configured (if any) and pass it to the `network.managed` Salt call;
- deploy.sh: add a new deploy argument `-o` for specifying the operating system to preinstall on the jumpserver and/or VCP VMs:
  * defaults to 'ubuntu1604';
  * only iec scenarios will also support 'centos' for now;
- user-data: minor tweaks for CentOS compatibility:
  * use `systemctl` instead of the `service` utility;
  * explicitly enable the `salt-minion` service, since it defaults to disabled on RHEL systems;
  * explicitly call `ldconfig` to work around a stale cache on RHEL preventing `salt-minion` from using the OpenSSL library;
- states: virtual_init: skip non-existing sysctl options on CentOS:
  * CentOS currently uses a 3.x kernel which lacks certain sysctl options introduced only in 4.x kernels, so skip them;
- state: akraino_iec: add CentOS support:
  * move the iec repo to `/var/lib/akraino/iec` on both the Salt Master and cluster nodes;
- scenario defaults: add CentOS configuration:
  * OS-dependent configuration split;
  * CentOS base image, default packages etc.;
- AArch64 deploy requirements: add `xz` dependency:
  * the CentOS AArch64 cloud image is compressed with xz, so install the xz tools for decompression;
- xdf_data: make yaml parsing OS agnostic:
  * rename `apt` to `repo` where appropriate;
  * OS-dependent configuration parsing;
- lib_jump_deploy: CentOS handling changes (see the sketch below):
  * skip filesystem resize of the cloud image for CentOS;
  * add repo handling and package installation/removal handling for CentOS;
  * unxz the base image if necessary (CentOS AArch64 cloud image);

Change-Id: Ic3538bacd53198701ff4ef77db62218eabc662e7
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
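A hedged sketch of the kind of OS branching this adds to image preparation; function names, mount paths and package choices here are illustrative only:

    image="${1}"
    case "${image}" in
      *[Cc]ent[Oo][Ss]*)
        # AArch64 CentOS cloud image ships xz-compressed
        [[ "${image}" == *.xz ]] && unxz -fk "${image}"
        sudo chroot /mnt/image yum install -y git
        ;;
      *ubuntu*)
        sudo chroot /mnt/image apt-get install -y git
        ;;
    esac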
2019-05-16  [lib] Add uninstall/cleanup option  [Alexandru Avadanii, 1 file, -0/+21]
When multiple installers are used on the same jumpserver, it is useful to be able to automatically clean up the artifacts left behind by a previous deploy.

Change-Id: Ib3249f53ee9d6b1ba2409dd71bd13480536faedc
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
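A hedged sketch of what such a cleanup pass might cover; the actual function targets the installer's own resources, and the container name filter below is illustrative:

    # Remove leftover libvirt domains and their storage
    for vm in $(virsh list --all --name); do
      virsh destroy "${vm}" 2>/dev/null || true
      virsh undefine "${vm}" --remove-all-storage 2>/dev/null || true
    done
    # Remove leftover infrastructure containers
    docker ps -aq --filter name=fuel | xargs -r docker rm -f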
2019-05-10  [maas] Fix permissions on (partial) redeploy  [Alexandru Avadanii, 1 file, -2/+4]
When redeploying a cluster only (keeping the infrastructure containers from a previous deploy), some things need to be adjusted (see the sketch below):
- /entrypoint.sh exec permission;
- /etc/maas uid/gid re-align on a new (fresh) deploy;
- account for the different location of the /usr/sbin/tcpdump apparmor profile for CentOS jumpservers;

Change-Id: If51db0bc95eff1a497e1df5d457e26a7b902aa5a
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
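A hedged sketch of the first two fix-ups, assuming the MaaS container is named `maas` (the name is illustrative):

    docker exec maas chmod +x /entrypoint.sh
    docker exec maas chown -R maas:maas /etc/maas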
2019-05-06  Merge "[virtual] Parameterize scenarios based on PDF/IDF"  [Alexandru Avadanii, 1 file, -25/+23]
2019-04-18  mcpcontrol: Avoid duplicate ip rules  [Alexandru Avadanii, 1 file, -1/+2]
Executing deploy.sh multiple times led to duplicate ip rules.

Change-Id: Iad5886a851970f166996226fa3d115a93113c6db
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
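A hedged sketch of making the rule addition idempotent; the subnet and table number are placeholders for the values used by the deploy scripts:

    ip rule list | grep -q "from 172.17.0.0/16 lookup 100" || \
      sudo ip rule add from 172.17.0.0/16 table 100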
2019-04-15  mcpcontrol: policy based routing for INSTALLER_IP  [Alexandru Avadanii, 1 file, -1/+2]
To bypass Docker 'bridge'-backed network isolation, we previously added an extra routing hop, which broke access from inside the 'mcpcontrol' Docker network (typically 10.20.0.0/24) to its bridge address (10.20.0.1), leading to DNS issues on Salt Master.

This change leverages policy based routing to only add the extra routing hop for connections originating from the default Docker bridge network ('docker0'). Note that other Docker networks using the 'bridge' driver are still isolated from 'mcpcontrol'.

Fixes: d9b44acb
Change-Id: Ib92901c3278ae9b815f28f26d4c26f82bcadacd6
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
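A hedged sketch of the policy based routing idea: only packets coming from the docker0 subnet get the extra hop towards the Salt Master. Addresses, the device name and the table number are illustrative placeholders:

    sudo ip rule add from 172.17.0.0/16 to 10.20.0.2/32 table 100
    sudo ip route add 10.20.0.2/32 via 192.168.11.1 dev pxebr table 100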
2019-04-11  route mcpcontrol via PXE br to bypass isolation  [Alexandru Avadanii, 1 file, -1/+2]
Recent virsh/Docker network rework changed mcpcontrol (previously a virsh-managed network) into a Docker-controlled network using the 'bridge' driver. As a consequence, Docker now isolates traffic from the 'mcpcontrol' network from the default Docker bridge network ('docker0') using iptables rules that check input/output interfaces. Yardstick (and any other Docker container hooked via 'docker0') will not be able to ssh into the Salt master due to this isolation.

One possible workaround would be to explicitly ACCEPT traffic from 'docker0' going to the Salt master. However, this is only properly supported starting with Docker 17.06, while most CI hosts and end users are still using 17.05 or older. In older Docker releases, the DOCKER-USER iptables chain was not available, so injecting custom iptables rules and making them persistent is not only complicated, it is also prone to subtle errors.

Another way to bypass the iptables rules is to route the packets coming from our new Docker network via another bridge before letting them find their way into 'docker0'. This change adds a new route for the Salt master host (note that the MaaS container will not benefit from this) via the PXE bridge on the jumphost (which can be either a real Linux bridge for baremetal deployments or a virsh-managed network), adding one extra network hop for each packet going between our 'mcpcontrol' Docker network and 'docker0', effectively bypassing the Docker-enforced iptables DROP.

Change-Id: Id8ac7a638c778887b361c9b64c320664c88f59fd
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
2019-04-08  [virtual] Parameterize scenarios based on PDF/IDF  [Alexandru Avadanii, 1 file, -25/+23]
NOTE: only os-nosdn-nofeature-noha is parameterized for now.
- move config drive & disk creation from prepare_vms to create_vms;
- make the default disk size(s) configurable based on scenario defaults and vPDF;
  * compute nodes require 2 disks to be defined in vPDF, since the pillar reclass model assumes /dev/vdb is reserved for cinder;
  * if multiple disks are defined in vPDF, they are created and attached accordingly (only ctl01 and cmp nodes are parameterized in this change, and only for the os-nosdn-nofeature-noha scenario);
- vCPU specifications are deduced from vPDF (sockets, cores);
  * threads/core is hard set to 2, since vPDF does not have a key for it;
  * NUMA resources are distributed evenly based on the number of sockets configured in the PDF;
  * no less than the minimum requirement for a scenario is allocated (e.g. if the PDF specifies 2 cores but the scenario requires at least 4 cores, the larger value is used);
- RAM is deduced from the PDF, again allocating no less than the minimum requirement (e.g. if the PDF specifies 2GB RAM for computes but the scenario requires at least 8GB, the larger value is used); see the sketch below.

Change-Id: I97188aa2a1006865b8429eb6483e10c76795f7d2
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
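A minimal sketch of clamping a PDF value to the scenario minimum; the variable names and values are illustrative:

    pdf_ram_gb=2           # value parsed from the (v)PDF
    scenario_min_ram_gb=8  # minimum required by the scenario defaults
    ram_gb=$(( pdf_ram_gb > scenario_min_ram_gb ? pdf_ram_gb : scenario_min_ram_gb ))
    echo "Allocating ${ram_gb}GB RAM for the node"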
2019-03-18  [lib] nbd: Explicitly map partitions  [Alexandru Avadanii, 1 file, -1/+5]
Certain kernels (e.g. 4.4.0-101+ in Ubuntu) no longer automatically ack the partition table update after `kpartx -a /dev/nbdX`, see [1]. To avoid another dependency on `parted` packages, use `partx` from `util-linux`, which is already installed as a dependency of e2fsprogs.

[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1743026

Change-Id: Ibd993fe210c1a11814e89a66759568d4d117d613
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
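A hedged sketch of explicitly (re)reading the partition table on an NBD device after connecting a cloud image; device and image names are illustrative:

    sudo modprobe nbd max_part=8
    sudo qemu-nbd --connect=/dev/nbd0 ubuntu.qcow2
    sudo partx -av /dev/nbd0    # partx from util-linux, instead of kpartx/parted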
2019-03-06  [lib] Create veths using systemd opnfv-fuel units  [Alexandru Avadanii, 1 file, -8/+40]
Create two systemd services on the jumphost: one that creates the veth pairs and one that adds them to the virsh/real bridges. This allows us to set the docker containers' restart policy to 'always', enabling persistent Salt Master/MaaS containers across jumphost reboots.

NOTE: libvirt creates virtual networks asynchronously, hence the need for retrying when hooking veths to them.

JIRA: FUEL-406
Change-Id: I1ca033cb5eb854b577b57bb2387a58bd9605a5bb
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
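A hedged sketch of what the veth-handling units boil down to; interface and bridge names are illustrative:

    sudo ip link add veth_mcp0 type veth peer name veth_mcp1
    sudo ip link set veth_mcp0 up
    sudo ip link set veth_mcp1 up
    # libvirt creates its virtual networks asynchronously, so retry the attach
    for _ in $(seq 1 5); do
      sudo ip link set veth_mcp0 master pxebr && break
      sleep 2
    done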
2019-02-14  [baremetal] Containerize MaaS  [Alexandru Avadanii, 1 file, -33/+32]
- replace the mas01 VM with a Docker container;
- drop the `mcpcontrol` virsh-managed network, including the special handling previously required for it across all scripts;
- drop infrastructure VM handling from the scripts; the only VMs we still handle are cluster VMs for virtual and/or hybrid deployments;
- drop the SSH server from mas01;
- stop running the linux state on mas01, as all prerequisites are properly handled during the Docker build or via entrypoint.sh; for completeness, we still keep the pillar data in sync with the actual contents of the mas01 configuration, so running the state manually would still work;
- make port 5240 available on the jumpserver for MaaS dashboard access (see the sketch below);
- docs: update diagrams and text to reflect the new changes;

Change-Id: I6d9424995e9a90c530fd7577edf401d552bab929
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
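A hedged CLI analogue of exposing the MaaS dashboard from a persistent container; the real deployment wires this up through its own compose/entrypoint files, and the image/container names here are illustrative:

    docker run -d --name maas --restart=always -p 5240:5240 opnfv/fuel-maas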
2019-01-16  Make shutdown only on physical nodes  [Michael Polenchuk, 1 file, -1/+1]
Change-Id: If167e7a6bdcdccd6b6df43bd5cac54250abec61a
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
2019-01-13  [centos] Update altarch kernel URL  [Alexandru Avadanii, 1 file, -6/+2]
CentOS recently moved its kernel source RPM from the altarch subdir to the same directory where the x86_64 kernel sources used to reside, so update our script accordingly.

Change-Id: I88010eabdfc15d6a79350dface29258cc37c4b95
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
2018-11-29  [docker] compose: Switch ip_range to ipv4_address  [Alexandru Avadanii, 1 file, -2/+2]
Explicitly set the ipv4_address for each network instead of relying on ip_range allocation, which seems to fail / not be picked up. While at it, use docker-compose 1.22 or newer to bypass slow Docker network creation with the 'macvlan' driver [1].

[1] https://github.com/docker/compose/issues/5248

Change-Id: Ic31851522576ebb2407d869b7c3ed7bd06951922
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
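A hedged CLI analogue of pinning a static address rather than relying on range allocation (the compose files express the same idea with ipv4_address); network/container names and the address are illustrative:

    docker network connect --ip 10.20.0.2 mcpcontrol fuel-salt-master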
2018-09-28  [deploy] Use qemu:///system for virt-inst too  [Alexandru Avadanii, 1 file, -1/+1]
Make sure `virsh` and `virt-install` use the same connection URI.

Fixes: e49ffac1
Change-Id: I437f063ce9936804248b7cf09f6ecfef6417f387
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
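A hedged sketch of passing the same libvirt URI explicitly to virt-install; the VM name and disk path are illustrative:

    virt-install --connect qemu:///system --name vm01 --memory 2048 \
      --import --disk /var/lib/libvirt/images/vm01.qcow2 --noautoconsole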
2018-09-24  [lib.sh] Split into multiple files for readability  [Alexandru Avadanii, 1 file, -0/+476]
lib.sh got pretty big over time, making it hard to maintain. Since most of the functions now defined in lib.sh are only required during build/deploy and not in state files, move them to a new file. While at it, prepare for running build/deploy as non-root and set a default connection string for virsh instead of using user-specific config in ~/.config/libvirt/libvirt.conf, which caused end user experience issues in the past.

Change-Id: Id8c2a8139e4bfdb99af2b0fad73b911ffa18ebea
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
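A minimal sketch of one way to set a default libvirt connection URI instead of relying on per-user config in ~/.config/libvirt/libvirt.conf:

    export LIBVIRT_DEFAULT_URI=qemu:///system
    virsh list --all    # now talks to the system instance without -c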