Commit log for path: mcp/reclass/classes/cluster/mcp-pike-ovs-novcp-ha
Age | Commit message | Author | Files | Lines
2018-03-07 | [novcp] Add deploy argument `-N` (experimental) | Alexandru Avadanii | 13 | -223/+0
Instead of duplicating scenarios for NOVCP, allow it to be specified using a new deploy argument, `-N`.

Things are getting convoluted, so instead of creating dedicated '*_pdf.yml.j2' files for each group of similar features, apply the templating in-place and rename all affected files to '.yml.j2'. This breaks the .gitignore assumption of hiding only '*_pdf.yml' files, so extend (manually) <mcp/reclass/classes/cluster/.gitignore> using `git ls-files --exclude-standard -o` after an expansion.

- ha: move nfv.cpu_pinning to j2, conditioned by 'baremetal';
- ha: add cmp00* vnode definitions (hugepages need more RAM);
- labs/local: enable hugepage params for non-dpdk noha;
- salt.sh: add route_wrapper to all non-infra VMs;

This change extends novcp support to all HA scenarios.

Change-Id: I7a80415ac33367ab227ececb4ffb1bc026546d36
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
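To illustrate the in-place templating described in the commit above, a cluster class renamed to '.yml.j2' could gate its NFV CPU-pinning and hugepage parameters on a Jinja2 condition. This is only a minimal sketch: the 'nodes_have_baremetal' template variable and the parameter values are illustrative assumptions, not the repository's actual template context or settings.

    # Hypothetical excerpt from an HA cluster class renamed to '<class>.yml.j2'.
    # 'nodes_have_baremetal' and the values below are assumptions for
    # illustration only.
    parameters:
      nova:
        compute:
    {%- if nodes_have_baremetal %}
          vcpu_pin_set: "2-7"              # nfv.cpu_pinning, baremetal only
          hugepages:
            size: 1G
            count: 16
            mount_point: /mnt/hugepages_1G
    {%- endif %}

Rendering such files in place avoids maintaining a parallel '*_pdf.yml.j2' copy per feature group, at the cost of having to track the generated '*.yml' files in .gitignore.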
2018-02-25 | [ovs/dpdk] Parameterize node-specific compute args | Cristina Pauna | 2 | -5/+1
- node-specific parameters (nova pinning, hugepages, dpdk) should be configurable via IDF, on a per-node basis;
- keep default settings for lf-pod2, with and without DPDK, and override them for virtual deploys via the local-virtual1 IDF;
- leave neutron_tenant_* vars hardcoded for now, as they are required on both ctl and cmp nodes; this way we deal strictly with cmp params, so we can nicely pass them via config.yml to reclass per-node (and not per-role), allowing mixed computes later;
- add compute params for ovs/odl-noha, preparing them for deployment on baremetal later.

JIRA: ARMBAND-343
Change-Id: I89a58b9565679ab3882d85f07ae817690ae85c67
Signed-off-by: Cristina Pauna <cristina.pauna@enea.com>
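A per-node override of the kind described above would live in the lab's IDF and be passed through config.yml to reclass. The fragment below is a rough sketch assuming hypothetical key names (compute_params, compute_hugepages_*, compute_kernel_isolcpu, compute_dpdk_driver); the real lf-pod2 and local-virtual1 IDFs are the authoritative reference.

    # Hypothetical IDF fragment: node-specific compute parameters, defined
    # per node so mixed computes become possible later. Key names are
    # illustrative assumptions, not the actual IDF schema.
    idf:
      fuel:
        reclass:
          node:
            - compute_params:            # first compute node (e.g. cmp001)
                common:
                  compute_hugepages_size: 2M
                  compute_hugepages_count: 2048
                  compute_kernel_isolcpu: 2,3
                dpdk:
                  compute_dpdk_driver: uio
                  compute_ovs_pmd_cpu_mask: "0x6"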
2018-02-06 | Add NOVCP HA OVS scenario (baremetal, virtual) | Alexandru Avadanii | 13 | -0/+227
Add a new class of scenarios, based on the existing baremetal HA scenarios, but instead of having a virtualized control plane (VCP), all OpenStack controller services run directly on the cluster nodes.

This change adds the common scaffolding, as well as the OVS scenario. The new scenario(s) can be used on full-baremetal clusters, soon on full-virtual clusters and later on hybrid (virt + bare) clusters.

This change defines old (current) style scenario definitions for both baremetal and virtual, both named:
- os-nosdn-nofeature-novcp-ha;

Prerequisites:

1. Merge-able by name reclass.storage.node definitions
   Each cluster (e.g. database, telemetry) adds its own set of reclass storage node definitions, which for novcp scenarios should be merged into a single node (kvm) based on the 'name' property. This is not currently supported by the upstream reclass 'node.sls' high state, so add support for it via an early patch (required before salt-master-init.sh tries to handle reclass.storage).

2. Common reclass classes for novcp
   Some of the classes in `baremetal-...-common-ha` are not fit for novcp, as they define VCP-specific config/inheritance, so add new versions of said classes with novcp in mind or adapt old classes:
   - parameterize ctl hostname in `openstack_compute.yml`;
   - new `openstack_control_novcp.yml`;
   - new `openstack_init_novcp.yml`;

3. Handle hard-set names in state files for baremetal nodes
   Some of our state files (e.g. maas) hardcode baremetal node names to 'kvm', 'cmp', so we need to align the names in the novcp scenario with these values to re-use the maas state. As a future improvement we should parameterize these names in all state files. As a consequence, our baremetal controller nodes will also use 'kvm*' hostnames (instead of 'ctl*').

4. Add 'noifupdown' to all interfaces on kvm nodes
   This prevents duplicate IPs/routes created at *any* ifup due to /etc/network/route-br-ex. Patch salt-formula-linux to skip network restart on 'noifupdown', also when routes are present on that interface.

JIRA: FUEL-310
Change-Id: Ic67778f63e5ee0334dbfe9547c7109ec1a938d61
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
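To illustrate prerequisite 1 above, the sketch below shows two cluster classes that each contribute a reclass.storage node entry with the same 'name'; with the merge-by-name patch, both class lists end up on the single generated kvm01 node instead of the later definition overwriting the earlier one. Class paths, storage keys and the node name are illustrative assumptions, not verbatim repository content.

    # Hypothetical fragment of a database cluster class.
    parameters:
      reclass:
        storage:
          node:
            openstack_database_node01:
              name: kvm01
              classes:
                - cluster.${_param:cluster_name}.openstack.database

    # Hypothetical fragment of a telemetry cluster class. It uses a different
    # storage key but the same 'name', so merge-by-name collapses both entries
    # into one kvm01 node definition.
    parameters:
      reclass:
        storage:
          node:
            openstack_telemetry_node01:
              name: kvm01
              classes:
                - cluster.${_param:cluster_name}.openstack.telemetry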