From 1ce96c4e706c009d96799122140b3ececd6a4b9b Mon Sep 17 00:00:00 2001
From: Bryan Sullivan
Date: Wed, 7 Feb 2018 13:22:47 -0800
Subject: Update diagram and readme's

JIRA: MODELS-2

Change-Id: Ib7838965698b948ca3c2b24174c6ce85fd90c623
Signed-off-by: Bryan Sullivan
---
 docs/images/models-k8s.png |  Bin 107735 -> 168289 bytes
 tools/README.md            |  41 +++++++++++----
 tools/kubernetes/README.md | 124 ++++++++++++++++++++++-----------------------
 3 files changed, 91 insertions(+), 74 deletions(-)

diff --git a/docs/images/models-k8s.png b/docs/images/models-k8s.png
index 107e2bb..221442c 100644
Binary files a/docs/images/models-k8s.png and b/docs/images/models-k8s.png differ
diff --git a/tools/README.md b/tools/README.md
index 707bd55..5990c07 100644
--- a/tools/README.md
+++ b/tools/README.md
@@ -4,19 +4,38 @@
 .. (c) 2017-2018 AT&T Intellectual Property, Inc
 -->
 
-This repo contains experimental scripts etc for setting up cloud-native stacks for application deployment and management on bare-metal servers. A lot of cloud-native focus so far has been on public cloud providers (AWS, GCE, Azure) but there aren't many tools and even fewer full-stack open source platforms for setting up bare metal servers with the same types of cloud-native stack features. Further, app modeling methods supported by cloud-native stacks differ substantially. The tools in this repo are intended to help provide a comprehensive, easily deployed set of cloud-native stacks that can be further used for analysis and experimentation on converged app modeling and lifecycle management methods, as well as other purposes, e.g. assessments of efficiency, performance, security, and resilience.
+This repo contains experimental scripts etc for setting up cloud-native and hybrid-cloud stacks for application deployment and management on bare-metal servers. The goal of these tools is to support the OPNFV Models project with various implementations of cloud-native and OpenStack-based clouds, as well as hybrid clouds.
This will serve as a platform for testing modeled VNF lifecycle management in any one of these cloud types, or in a hybrid cloud environment.
+
+In the process, this is intended to help developers automate setup of full-featured stacks, to overcome the sometimes complex, out-of-date, incomplete, or unclear directions provided for manual stack setup by the upstream projects.
+
+The tools in this repo are thus intended to help provide a comprehensive, easily deployed set of cloud-native stacks that can be further used for analysis and experimentation on converged app modeling and lifecycle management methods, as well as other purposes, e.g. assessments of efficiency, performance, security, and resilience.
 
 The toolset will eventually include these elements of one or more full-stack platform solutions:
-* hardware prerequisite/options guidance
-* container-focused application runtime environment, e.g.
-  * kubernetes
-  * docker-ce
-  * rancher
+* bare-metal server deployment
+  * [MAAS](https://maas.io)
+  * [Bifrost](https://docs.openstack.org/bifrost/latest/)
+* application runtime environments, also referred to as Virtual Infrastructure Managers (VIM) using the ETSI NFV terminology
+  * container-focused (often referred to as "cloud-native", although that term really refers to broader concepts)
+    * [Kubernetes](https://github.com/kubernetes/kubernetes)
+    * [Docker-CE (Moby)](https://mobyproject.org/)
+    * [Rancher](https://rancher.com/)
+  * VM-focused
+    * [OpenStack Helm](https://wiki.openstack.org/wiki/Openstack-helm)
 * software-defined storage backends, e.g.
-  * ceph
-* container networking (CNI)
+  * [Ceph](https://ceph.com/)
+* cluster internal networking
+  * [Calico CNI](https://github.com/projectcalico/cni-plugin)
 * app orchestration, e.g.
via
-  * cloudify
-  * ONAP
-  * helm
+  * [Cloudify](https://cloudify.co/)
+  * [ONAP](https://www.onap.org/)
+  * [Helm](https://github.com/kubernetes/helm)
+  * [OpenShift Origin](https://www.openshift.org/)
+* monitoring and telemetry
+  * [OPNFV VES](https://github.com/opnfv/ves)
+  * [Prometheus](https://prometheus.io/)
 * applications useful for platform characterization
+  * [Clearwater IMS](http://www.projectclearwater.org/)
+
+An overall concept for how cloud-native and OpenStack cloud platforms will be deployable as a hybrid cloud environment, with additional OPNFV features such as VES, is shown below.
+
+![Hybrid Cloud Cluster](/docs/images/models-k8s.png?raw=true "Resulting Cluster")
\ No newline at end of file
diff --git a/tools/kubernetes/README.md b/tools/kubernetes/README.md
index 31daaaf..49c8945 100644
--- a/tools/kubernetes/README.md
+++ b/tools/kubernetes/README.md
@@ -1,63 +1,61 @@
-
-
-This folder contains scripts etc to setup a kubernetes cluster with the following type of environment and components:
-* hardware
-  * 2 or more bare metal servers: may also work with VMs
-  * two connected networks (public and private): may work if just a single network
-  * one or more disks on each server: ceph-osd can be setup on an unused disk, or a folder (/ceph) on the host OS disk
-* Kubernetes
-  * single k8s master node
-  * other k8s cluster worker nodes
-* Ceph: backend for persistent volume claims (PVCs) for the k8s cluster, deployed using Helm charts from https://github.com/att/netarbiter
-* Helm on k8s master (used for initial cluster deployment only)
-  * demo helm charts for Helm install verification etc, cloned from https://github.com/kubernetes/charts and modified/tested to work on this cluster
-* Prometheus: server on the k8s master, exporters on the k8s workers
-* Cloudify CLI and Cloudify Manager with Kubernetes plugin (https://github.com/cloudify-incubator/cloudify-kubernetes-plugin)
-* OPNFV VES Collector and Agent
-* OPNFV Barometer collectd plugin with
libvirt and kafka support
-* As many components as possible above will be deployed using k8s charts, managed either through Helm or Cloudify
-
-A larger goal of this work is to demonstrate hybrid cloud deployment as indicated by the presence of OpenStack nodes in the diagram below.
-
-Here is an overview of the deployment process, which if desired can be completed via a single script, in about 50 minutes for a four-node k8s cluster of production-grade servers.
-* demo_deploy.sh: wrapper for the complete process
-  * ../maas/deploy.sh: deploys the bare metal host OS (Ubuntu or Centos currently)
-  * k8s-cluster.sh: deploy k8s cluster
-    * deploy k8s master
-    * deploy k8s workers
-    * deploy helm
-    * verify operation with a hello world k8s chart (nginx)
-    * deploy ceph (ceph-helm or on bare metal) and verify basic PVC jobs
-    * verify operation with a more complex (PVC-dependent) k8s chart (dokuwiki)
-  * ../prometheus/prometheus-tools.sh: setup prometheus server, exporters on all nodes, and grafana
-  * ../cloudify/k8s-cloudify.sh: setup cloudify (cli and manager)
-    * verify kubernetes+ceph+cloudify operation with a PVC-dependent k8s chart deployed thru cloudify
-  * (VES repo) tools/demo_deploy.sh: deploy OPNFV VES
-    * deploy VES collector
-    * deploy influxdb and VES events database
-    * deploy VES dashboard in grafana (reuse existing grafana above)
-    * deploy VES agent (OPNFV Barometer "VES Application")
-    * on each worker, deploy OPNFV Barometer collectd plugin
-* when done, these demo elements are available
-  * Helm-deployed demo app dokuwiki, at the assigned node port on any k8s cluster node (e.g.
http://$NODE_IP:$NODE_PORT)
-  * Cloudify-deployed demo app nginx at http://$k8s_master:$(assigned node port)
-  * Prometheus UI at http://$k8s_master:9090
-  * Grafana dashboards at http://$ves_grafana_host:3000
-  * Grafana API at http://$ves_grafana_auth@$ves_grafana_host:3000/api/v1/query?query=
-  * Kubernetes API at https://$k8s_master:6443/api/v1/
-  * Cloudify API at (example): curl -u admin:admin --header 'Tenant: default_tenant' http://$k8s_master/api/v3.1/status
-
-See comments in [setup script](k8s-cluster.sh) and the other scripts for more info.
-
-This is a work in progress!
-
-![Resulting Cluster](/docs/images/models-k8s.png?raw=true "Resulting Cluster")
-
-The flow for this demo deployment is illustrated below.
-
-![models_demo_flow.svg](/docs/images/models_demo_flow.svg "models_demo_flow.svg")
-
+
+
+This folder contains scripts etc to setup a kubernetes cluster with the following type of environment and components:
+* hardware
+  * 2 or more bare metal servers: may also work with VMs
+  * two connected networks (public and private): may work if just a single network
+  * one or more disks on each server: ceph-osd can be setup on an unused disk, or a folder (/ceph) on the host OS disk
+* Kubernetes
+  * single k8s master node
+  * other k8s cluster worker nodes
+* Ceph: backend for persistent volume claims (PVCs) for the k8s cluster, deployed using Helm charts from [netarbiter](https://github.com/att/netarbiter)
+* Helm on k8s master (used for initial cluster deployment only)
+  * demo helm charts for Helm install verification etc, cloned from [kubernetes charts](https://github.com/kubernetes/charts) and modified/tested to work on this cluster
+* Prometheus: server on the k8s master, exporters on the k8s workers
+* Cloudify CLI and Cloudify Manager with [Kubernetes plugin](https://github.com/cloudify-incubator/cloudify-kubernetes-plugin)
+* OPNFV VES Collector and Agent
+* OPNFV Barometer collectd plugin with libvirt and kafka support
+* As many components as
possible above will be deployed using k8s charts, managed either through Helm or Cloudify
+
+A larger goal of this work is to demonstrate hybrid cloud deployment as indicated by the presence of OpenStack nodes in the diagram below.
+
+Here is an overview of the deployment process, which if desired can be completed via a single script, in about 50 minutes for a four-node k8s cluster of production-grade servers.
+* demo_deploy.sh: wrapper for the complete process
+  * [/tools/maas/deploy.sh](/tools/maas/deploy.sh): deploys the bare metal host OS (Ubuntu or CentOS currently)
+  * k8s-cluster.sh: deploy k8s cluster
+    * deploy k8s master
+    * deploy k8s workers
+    * deploy helm
+    * verify operation with a hello world k8s chart (nginx)
+    * deploy ceph (ceph-helm or on bare metal) and verify basic PVC jobs
+    * verify operation with a more complex (PVC-dependent) k8s chart (dokuwiki)
+  * [/tools/prometheus/prometheus-tools.sh](/tools/prometheus/prometheus-tools.sh): setup prometheus server, exporters on all nodes, and grafana
+  * [/tools/cloudify/k8s-cloudify.sh](/tools/cloudify/k8s-cloudify.sh): setup cloudify (cli and manager)
+    * verify kubernetes+ceph+cloudify operation with a PVC-dependent k8s chart deployed through cloudify
+  * (VES repo) tools/demo_deploy.sh: deploy OPNFV VES
+    * deploy VES collector
+    * deploy influxdb and VES events database
+    * deploy VES dashboard in grafana (reuse existing grafana above)
+    * deploy VES agent (OPNFV Barometer "VES Application")
+    * on each worker, deploy OPNFV Barometer collectd plugin
+* when done, these demo elements are available, as described in the script output
+  * Helm-deployed demo app dokuwiki
+  * Cloudify-deployed demo app nginx
+  * Prometheus UI
+  * Grafana dashboards and API
+  * Kubernetes API
+  * Cloudify API
+
+See comments in [setup script](k8s-cluster.sh) and the other scripts for more info.
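As a quick sanity check once the deployment finishes, the demo endpoints retained from the earlier README text (Prometheus UI on port 9090 of the k8s master, the Kubernetes API on port 6443, and the Cloudify Manager REST API with its default admin credentials and `default_tenant` header) can be probed from the admin host. A minimal sketch, assuming `k8s_master` is already set to the master node's address; it only defines helper functions and is not part of any repo script:

```shell
#!/bin/bash
# Sketch: spot-check the demo endpoints after demo_deploy.sh completes.
# Assumes the environment variable k8s_master holds the master node address;
# ports, paths, and credentials are those noted in the README's endpoint list.

endpoint_urls() {
  # One URL per line, for the services the demo exposes
  echo "http://${k8s_master}:9090"            # Prometheus UI
  echo "https://${k8s_master}:6443/api/v1/"   # Kubernetes API
  echo "http://${k8s_master}/api/v3.1/status" # Cloudify Manager API
}

check_endpoints() {
  local url ok=0 fail=0
  while read -r url; do
    # -k: the k8s API serves a self-signed cert; the Cloudify credentials
    # and Tenant header are simply ignored by the other services
    if curl -sk -o /dev/null --max-time 5 -u admin:admin \
        --header 'Tenant: default_tenant' "$url"; then
      echo "OK   $url"; ok=$((ok+1))
    else
      echo "FAIL $url"; fail=$((fail+1))
    fi
  done < <(endpoint_urls)
  echo "$ok reachable, $fail unreachable"
  [ "$fail" -eq 0 ]
}
```

Run `check_endpoints` after exporting `k8s_master`; a non-zero exit status flags any unreachable service.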
+
+See [readme in the folder above](/tools/README.md) for an illustration of the resulting k8s cluster in a hybrid cloud environment.
+
+The flow for this demo deployment is illustrated below.
+
+![models_demo_flow.svg](/docs/images/models_demo_flow.svg "models_demo_flow.svg")
+
+This is a work in progress!
\ No newline at end of file
--
cgit 1.2.3-korg
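The per-stage scripts named in the kubernetes README above can be chained in the same order demo_deploy.sh wraps them. A minimal sketch under stated assumptions: the `tools_dir` default and the argument-free invocations are illustrative only (each real script takes arguments documented in its own comments), and the final VES stage lives in the separate OPNFV VES repo, so it is omitted here:

```shell
#!/bin/bash
# Sketch of the deployment flow described in the README, in order:
# bare-metal OS deploy, k8s cluster + ceph + helm, prometheus/grafana,
# then cloudify. Paths are relative to this repo's tools/ folder;
# argument-free calls are an assumption for illustration.

deploy_steps() {
  # Emit the stages in execution order, one per line
  cat <<'EOF'
maas/deploy.sh
kubernetes/k8s-cluster.sh
prometheus/prometheus-tools.sh
cloudify/k8s-cloudify.sh
EOF
}

run_all() {
  local tools_dir="${1:-./tools}" step
  while read -r step; do
    echo "=== running ${step} ==="
    bash "${tools_dir}/${step}" || { echo "failed at ${step}" >&2; return 1; }
  done < <(deploy_steps)
}
```

Stopping at the first failed stage mirrors the wrapper's sequential, fail-fast structure; any stage can also be re-run individually.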