author     Bryan Sullivan <bryan.sullivan@att.com>  2017-12-07 07:02:31 -0800
committer  Bryan Sullivan <bryan.sullivan@att.com>  2017-12-07 07:02:31 -0800
commit     c61b1a6fabdd8820f93dedaee244c66da1a0a6e7 (patch)
tree       7b3c48137a87955c713730cbb0d08bc927106511
parent     0ed35881347dc09ede49d1520f4870a326bf640e (diff)
Update readme
JIRA: MODELS-2
Change-Id: I0b8bbfd8d55bdfe2ffacf87296cb5c64ca5ce3d1
Signed-off-by: Bryan Sullivan <bryan.sullivan@att.com>
-rw-r--r--  tools/README.md             6
-rw-r--r--  tools/kubernetes/README.md  32
2 files changed, 33 insertions, 5 deletions
diff --git a/tools/README.md b/tools/README.md
index 16c5b79..a059d3a 100644
--- a/tools/README.md
+++ b/tools/README.md
@@ -1,4 +1,4 @@
-This repo contains experimental scripts etc for setting up cloud-native stacks for application deployment and management on bare-metal servers. A lot of cloud-native focus so far has been on public cloud providers (AWS, GCE, Azure) but there aren't many tools and even fewer full-stack open source platforms for setting up bare metal servers with the same types of cloud-native stack features. This repo is thus a collection of tools in development toward that goal, useful in experimentation, demonstration, and further investigation into characteristics of cloud-native platforms in bare-metal environments, e.g. efficiency, performance, security, and resilience.
+This repo contains experimental scripts etc for setting up cloud-native stacks for application deployment and management on bare-metal servers. A lot of cloud-native focus so far has been on public cloud providers (AWS, GCE, Azure) but there aren't many tools and even fewer full-stack open source platforms for setting up bare metal servers with the same types of cloud-native stack features. Further, app modeling methods supported by cloud-native stacks differ substantially. The tools in this repo are intended to help provide a comprehensive, easily deployed set of cloud-native stacks that can be further used for analysis and experimentation on converged app modeling and lifecycle management methods, as well as other purposes, e.g. assessments of efficiency, performance, security, and resilience.
The toolset will eventually include these elements of one or more full-stack platform solutions:
* hardware prerequisite/options guidance
@@ -8,9 +8,9 @@ The toolset will eventually include these elements of one or more full-stack pla
* rancher
* software-defined storage backends, e.g.
* ceph
-* runtime-native networking ("out of the box" networking features, vs some special add-on networking software)
+* container networking (CNI)
* app orchestration, e.g. via
* cloudify
* ONAP
- * Helm
+ * helm
* applications useful for platform characterization
\ No newline at end of file
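The toolset elements listed above (container engine, CNI, ceph-backed storage, helm, app orchestration) can be spot-checked once a cluster is up. The following is only an illustrative sketch, not part of this repo's scripts; it assumes kubectl and helm are already installed and pointed at the cluster, and the namespaces shown are typical defaults rather than anything defined here:

    # Illustrative only -- quick checks of the stack elements described in tools/README.md
    kubectl get nodes -o wide          # container runtime and cluster membership
    kubectl get pods -n kube-system    # CNI and other cluster add-ons
    kubectl get storageclass           # software-defined storage backend (e.g. ceph)
    helm version                       # helm client and server (tiller) are reachable
    helm list                          # charts deployed via app orchestration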
diff --git a/tools/kubernetes/README.md b/tools/kubernetes/README.md
index fa6ca5c..b78fb92 100644
--- a/tools/kubernetes/README.md
+++ b/tools/kubernetes/README.md
@@ -1,7 +1,35 @@
This folder contains scripts etc to setup a kubernetes cluster with the following type of environment and components:
* hardware
- * 2 or more bare metal servers
- * two connected networks (public and private): may work if just a single network
- * one or more disks on each server: ceph-osd can be setup on an unused disk, or a folder (/ceph) on the host OS disk
-* Kubernetes
- * single k8s master (admin) node
- * other cluster (k8s worker) nodes
-* Ceph: backend for persistent volume claims (PVCs) for the k8s cluster, deployed using Helm charts from https://github.com/att/netarbiter
-* Helm on k8s master (used for initial cluster deployment only)
-* demo helm charts for Helm install verification etc, cloned from https://github.com/kubernetes/charts and modified/tested to work on this cluster
-* Prometheus: server on the k8s master, exporters on the k8s workers
-* Cloudify CLI and Cloudify Manager with Kubernetes plugin (https://github.com/cloudify-incubator/cloudify-kubernetes-plugin)
-* OPNFV VES Collector and Agent
-* OPNFV Barometer collectd plugin with libvirt and kafka support
-* As many components as possible above will be deployed using k8s charts, managed either through Helm or Cloudify
-A larger goal of this work is to demonstrate hybrid cloud deployment as indicated by the presence of OpenStack nodes in the diagram below.
+ * 2 or more bare metal servers: may also work with VMs
+ * two connected networks (public and private): may work if just a single network
+ * one or more disks on each server: ceph-osd can be setup on an unused disk, or a folder (/ceph) on the host OS disk
+* Kubernetes
+ * single k8s master node
+ * other k8s cluster worker nodes
+* Ceph: backend for persistent volume claims (PVCs) for the k8s cluster, deployed using Helm charts from https://github.com/att/netarbiter
+* Helm on k8s master (used for initial cluster deployment only)
+* demo helm charts for Helm install verification etc, cloned from https://github.com/kubernetes/charts and modified/tested to work on this cluster
+* Prometheus: server on the k8s master, exporters on the k8s workers
+* Cloudify CLI and Cloudify Manager with Kubernetes plugin (https://github.com/cloudify-incubator/cloudify-kubernetes-plugin)
+* OPNFV VES Collector and Agent
+* OPNFV Barometer collectd plugin with libvirt and kafka support
+* As many components as possible above will be deployed using k8s charts, managed either through Helm or Cloudify
+A larger goal of this work is to demonstrate hybrid cloud deployment as indicated by the presence of OpenStack nodes in the diagram below.
+
+Here is an overview of the deployment process, which if desired can be completed via a single script, in about 50 minutes for a four-node k8s cluster of production-grade servers.
+* demo_deploy.sh: wrapper for the complete process
+ * ../maas/deploy.sh: deploys the bare metal host OS (Ubuntu or Centos currently)
+ * k8s-cluster.sh: deploy k8s cluster
+ * deploy k8s master
+ * deploy k8s workers
+ * deploy helm
+ * verify operation with a hello world k8s chart (nginx)
+ * deploy ceph (ceph-helm or on bare metal) and verify basic PVC jobs
+ * verify operation with a more complex (PVC-dependent) k8s chart (dokuwiki)
+ * ../prometheus/prometheus-tools.sh: setup prometheus server, exporters on all nodes, and grafana
+ * ../cloudify/k8s-cloudify.sh: setup cloudify (cli and manager)
+ * verify kubernetes+ceph+cloudify operation with a PVC-dependent k8s chart deployed thru cloudify
+ * (VES repo) tools/demo_deploy.sh: deploy OPNFV VES
+ * deploy VES collector
+ * deploy influxdb and VES events database
+ * deploy VES dashboard in grafana (reuse existing grafana above)
+ * deploy VES agent (OPNFV Barometer "VES Application")
+ * on each worker, deploy OPNFV Barometer collectd plugin
+* when done, these demo elements are available
+ * Helm-deployed demo app dokuwiki, at the assigned node port on any k8s cluster node (e.g. http://$NODE_IP:$NODE_PORT)
+ * Cloudify-deployed demo app nginx at http://$k8s_master:$port
+ * Prometheus UI at http://$k8s_master:9090
+ * Grafana dashboards at http://$ves_grafana_host
+ * Grafana API at http://$ves_grafana_auth@$ves_grafana_host/api/v1/query?query=<string>
+ * Kubernetes API at https://$k8s_master:6443/api/v1/
+ * Cloudify API at (example): curl -u admin:admin --header 'Tenant: default_tenant' http://$k8s_master/api/v3.1/status
+
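The ceph/PVC verification steps in the list above can be approximated by hand. The sketch below is not from the repo's scripts; it assumes helm 2 syntax and the stable/dokuwiki chart from https://github.com/kubernetes/charts (the repo's own flow uses a locally cloned and modified copy), and the release name is illustrative:

    # Illustrative only -- manual check of the ceph-backed PVC path
    kubectl get storageclass                             # the ceph-backed class should be listed
    helm install --name demo-dokuwiki stable/dokuwiki    # a PVC-dependent chart
    kubectl get pvc                                      # the dokuwiki claim should become Bound
    kubectl get svc                                      # note any assigned node port for the app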
See comments in [setup script](k8s-cluster.sh) and other scripts in this folder for more info.
This is a work in progress!
![Resulting Cluster](/docs/images/models-k8s.png?raw=true "Resulting Cluster")
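The demo endpoints listed in the kubernetes README above can be sanity-checked from a shell. The placeholder variables ($k8s_master, $ves_grafana_host, $ves_grafana_auth) are the same ones used above; the Prometheus query path and Grafana /api/health endpoint are standard defaults for those tools rather than anything defined in this repo, and this is a sketch, not part of the repo's scripts:

    # Illustrative post-deploy checks of the demo endpoints
    curl -s "http://$k8s_master:9090/api/v1/query?query=up"           # Prometheus query API
    curl -s "http://$ves_grafana_auth@$ves_grafana_host/api/health"   # Grafana health check
    curl -sk "https://$k8s_master:6443/api/v1/"                       # k8s API (may require a token or client cert)
    curl -u admin:admin --header 'Tenant: default_tenant' "http://$k8s_master/api/v3.1/status"   # Cloudify Manager status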