path: root/docs/_static/images/opnfvplatformgraphic.png
author    Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2018-02-01 00:28:17 +0100
committer Gerrit Code Review <gerrit@opnfv.org>               2018-02-05 13:16:07 +0000
commit    0967b25c30a84b865830538497e9a4cb638bd3f8 (patch)
tree      e99093e063e03b4609f9131c22cf851e2994fc5e /docs/_static/images/opnfvplatformgraphic.png
parent    5870c9f95f0f80b5f0f8c4879b183562d82289de (diff)
Update git submodules
* Update docs/submodules/fuel from branch 'master'

- [baremetal] Rename all to drop baremetal prefix

  A few things differ between baremetal and virtual nodes:
  - provisioning method;
  - network setup;

  Since we now support fully dynamic network configuration based on
  PDF + IDF, dynamic provisioning of VMs on the jumpserver (as virtual
  cluster nodes), and MaaS-driven baremetal provisioning, let's drop
  the 'baremetal-' prefix from cluster model names and prepare for
  unified scenarios. Note that some limitations still apply, e.g.
  virtual nodes are spawned only on the jumpserver (localhost) for now.

  JIRA: FUEL-310
  Change-Id: If20077ac37c6f15961468abc58db7e16f2c29260
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [virtual] PDF-based network defs for cluster nodes

  Decouple virtual cluster nodes (ctl, gtw etc.) from opnfv_fn_* vars
  in favor of parsing PDF/IDF. This is the first step towards unifying
  baremetal and virtual network definition templates, as well as
  allowing virtual nodes to run on a remote hypervisor (and eventually
  with a different arch). opnfv_fn_* vars will still be used for infra
  VMs spawned on FN (cfg01 and optionally mas01).
  Adopt the new 'net_map.j2' from the Pharos submodule for the new
  templates (virtual), as well as the old ones (baremetal).

  JIRA: FUEL-322
  Change-Id: I150c2416566bbe42ea11cd00f12a8a7bf96776c2
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [virtual] Parameterize cluster model based on PDF

  - 10.1.0.0/24 (internal):
    * 10.1.0.101 -> opnfv_openstack_compute_node01_tenant_address
    * 10.1.0.124 -> opnfv_openstack_gateway_node01_tenant_address
  - 172.16.10.0/24 (mgmt):
    * 172.16.10.11  -> opnfv_openstack_control_node01_address
    * 172.16.10.100 -> opnfv_infra_config_address
    * 172.16.10.101 -> opnfv_openstack_compute_node01_control_address
    * 172.16.10.111 -> opnfv_opendaylight_server_node01_single_address
    * 172.16.10.124 -> opnfv_openstack_gateway_node01_address
  - 10.16.0.0/24 (public):
    * 10.16.0.11  -> opnfv_openstack_control_node01_external_address
    * 10.16.0.101 -> opnfv_openstack_compute_node01_external_address
    * 10.16.0.124 -> opnfv_openstack_gateway_node01_external_address

  To re-use the baremetal DPDK config template, move:
  - cluster.baremetal-mcp-pike-ovs-dpdk-ha.infra.config_pdf
  + cluster.all-mcp-arch-common.infra.config_dpdk_pdf

  Drop the unused 'ceilometer_graphite_publisher_host'
  (172.16.10.107).

  JIRA: FUEL-322
  Change-Id: I3aef3415bd696a7ae5b566af12af4733a50c2135
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
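  [Editor's sketch] One way to spot-check the mappings above is to
  generate the pod configuration without deploying (the dry-run
  invocation is the one from the pod_config notes further below) and
  grep for a parameter. The exact nesting/indentation of the generated
  YAML is an assumption here; the expected value is taken from the
  mapping list above:

  $ ./ci/deploy.sh -l local -p pod1 -s os-odl-nofeature-ha -d
  $ grep opnfv_openstack_control_node01_address mcp/deploy/images/pod_config.yml
      opnfv_openstack_control_node01_address: 172.16.10.11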
- [virtual] Change IP addrs to align with baremetal

  To be able to re-use the pod_config.yaml parameters generated from
  the PDF for both baremetal and virtual scenarios without forking the
  file, we first need to align the IP addresses used in virtual
  deployments. The currently hard-set values will be parameterized in
  a follow-up change.

  - 10.1.0.0/24 (internal):
    * 105 -> 101 (cmp01); 106 -> 102 (cmp02);
    * 110 -> 124 (gtw01);
  - 172.16.10.0/24 (mgmt):
    * 101 -> 11 (ctl01);
    * 105 -> 101 (cmp01); 106 -> 102 (cmp02);
    * 110 -> 124 (gtw01);
  - 10.16.0.0/24 (public):
    * 101 -> 11 (ctl01);
    * 105 -> 101 (cmp01); 106 -> 102 (cmp02);
    * 110 -> 124 (gtw01);

  JIRA: FUEL-322
  Change-Id: I5d5def4e92c3462f1a34f73dde65ef7a262a5d62
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [virtual] Split 'pxebr' from 'mcpcontrol' net

  - add a new virsh-managed network 'pxebr' (to mimic baremetal
    behavior on virtual PODs, this will be the equivalent of the
    PXE/admin network);
  - connect 'pxebr' to the 3rd interface of cfg01 and mas01 for all
    deploys (used to be baremetal-specific), replacing 'internal';
  - keep 'mcpcontrol' connected only to cfg01 (+ mas01 if present) for
    initial infrastructure bring-up (1st interface);
  - switch all virtual cluster nodes to 'pxebr' (1st interface);
  - use 'pxebr' for all Salt cluster node traffic, and 'mcpcontrol'
    only for mas01 <=> cfg01 Salt traffic;
  - convert <user-data.template> to jinja2 and expand it based on the
    PDF instead of using `envsubst`;
  - split <user-data.sh.j2> into two versions, one for each network
    used for Salt traffic;
  - ci/deploy.sh: read scenario data before template parsing, since
    the cluster domain variable is needed in the virsh network
    definitions;
  - leave the docs diagram refresh for later, after all possible
    deploy types have settled;
  - limit keyserver proxy usage to nodes where the configured HTTP
    proxy matches the first nameserver (true for all MaaS-provisioned
    nodes), so we can re-use the same pillar for FN VMs and baremetal
    nodes;
  - add the PXE/admin IP on cfg01's 3rd interface and switch the other
    vnodes' `salt_master_host` to point to it;

  JIRA: FUEL-322
  Change-Id: Ie4f7aedddf2ef81046f1127b377d88dce79f0fda
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [FN VM] Reboot VMs on jump, wait for all online

  - apply the `linux` state on cfg01 first, so the PXE/admin IP is
    added and FN VM minions are available;
  - add a barrier and wait for all FN VMs to register with cfg01;
  - use batch-mode execution while applying `linux.network` on FN VMs;
  - retry all states executed via <salt.sh> on FN VMs;

  JIRA: FUEL-310
  Change-Id: I72e1c565370072500df1d486fe76e6315f583c75
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [PDF] Switch to generate_config, unify templates

  - move bash template handling (previously expanded via `envsubst`)
    to lib.sh;
  - move j2 template handling to lib.sh;
  - move virsh network templates to the 'mcp/scripts/virsh_net'
    subdir;
  - switch virsh network templates from `envsubst` expansion to j2 and
    leverage generate_config.py, similar to the PDF Fuel installer
    adapter;
  - add relevant runtime env vars (e.g. SALT_MASTER, MAAS_IP) on the
    fly to the PDF, so templates can consume them like params coming
    from the PDF;
  - parameterize virsh network definitions based on the PDF (mgmt,
    public);

  JIRA: FUEL-322
  Change-Id: Ib94e78fc4f25797b9354a0552e884104da5d0003
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- deploy.sh: Move notify() to globals.sh

  Extend `notify` to 4 variants (a sketch follows this entry):
  - notify_i = inline (no newline) colored output;
  - notify   = `notify_i` + trailing '\n';
  - notify_n = `notify` + extra '\n' before and after;
  - notify_e = `notify` + stderr output + exit;

  This allows us to remove the explicit '\n' at call sites and clean
  up the code a bit. While at it, fix some 'NOTE' messages going to
  stderr instead of stdout.

  Change-Id: I682e3344ae9e307c4a68ab31c7766bc91b12ee58
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
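  [Editor's sketch] A minimal rendition of the four variants listed
  above. This is an illustration, not the actual globals.sh code; the
  ANSI color handling, the second (color) argument with its defaults,
  and the exit code are all assumptions:

  notify_i() {  # inline (no newline) colored output; $2 = ANSI color (assumed, default green)
    printf "\033[1;%sm%s\033[0m" "${2:-32}" "${1}"
  }
  notify() {    # notify_i + trailing '\n'
    notify_i "${1}" "${2:-32}"; printf '\n'
  }
  notify_n() {  # notify + extra '\n' before and after
    printf '\n'; notify "${1}" "${2:-32}"; printf '\n'
  }
  notify_e() {  # notify + stderr output + exit (exit code assumed)
    notify "${1}" "${2:-31}" >&2
    exit 1
  }

  # e.g.: notify_e "ERROR: PDF/IDF is mandatory for all deploys"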
- deploy.sh: Make PDF, IDF mandatory for all deploys

  - hard requirement on PDF/IDF configuration for all deployments;
  - expand j2 templates for virtual deploys too;

  Since until now we used the same model for *all* virtual PODs, one
  of the PDF/IDF sets for the existing vPODs (e.g. ericsson-virtual3)
  can be re-used on practically any host, without defining new vPODs
  (see the sketch at the end of this log).

  JIRA: FUEL-322
  Change-Id: Iac6aab91b6958d0e5e175ed142da6aafadc6fac6
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [vPDF] Use local-virtual1, unify pkg requirements

  Until PDF/IDF land in Pharos for all our virtual PODs, use a common
  vPDF we already provide as an example, to mimic the old hardcoded
  behavior while leveraging PDF/IDF parameterization. As a
  consequence, the python requirements previously needed only for
  baremetal should now be installed for virtual deploys too.

  JIRA: FUEL-322
  Change-Id: Ied1c907275285a9086450a15491ae516a0db1be2
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [vPDF] Add experimental vPOD lab config

  JIRA: FUEL-322
  Change-Id: I1482badbbbf66b4855faf6daf486520fc71e09b0
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [baremetal] Retire example pod_config.yaml

  It is easier to just generate the `pod_config.yaml` file than to
  maintain it, so let's remove it. While at it, link the sample
  PDF/IDF inside the pharos git submodule, so we don't have to pass a
  different lab-config URI to use the sample.

  To generate pod_config.yml for the sample PDF/IDF:
  $ ./ci/deploy.sh -l local -p pod1 -s os-odl-nofeature-ha -d
  $ cat mcp/deploy/images/pod_config.yml

  JIRA: FUEL-322
  Change-Id: If5898f92ef54bebc31d57f9632959e9093a89250
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>

- [PDF] pod1: Refresh PDF, IDF examples

  Sync the latest changes from the pharos git repo for our sample
  PDF/IDF:
  - move net_config from PDF to IDF;
  - minor cleanup;

  JIRA: FUEL-322
  Change-Id: If6865ac61a4942a1dd5daf7081fd8faa67e0e7bf
  Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
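  [Editor's sketch] A hedged illustration of the vPOD re-use noted
  above: assuming 'ericsson-virtual3' resolves to lab 'ericsson' /
  pod 'virtual3' in the lab-config tree, and with a placeholder
  scenario name, a virtual deploy on an arbitrary jumpserver could be
  started along the lines of:

  $ ./ci/deploy.sh -l ericsson -p virtual3 -s os-odl-nofeature-noha

  The flags mirror the pod_config generation example above; only the
  lab/pod pair changes, since PDF/IDF now drive the network
  definitions for both virtual and baremetal deploys.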
Diffstat (limited to 'docs/_static/images/opnfvplatformgraphic.png')
0 files changed, 0 insertions, 0 deletions