author    Bryan Sullivan <bryan.sullivan@att.com>    2017-12-07 07:13:29 -0800
committer Bryan Sullivan <bryan.sullivan@att.com>    2017-12-07 07:13:29 -0800
commit    e213a9855113c643c1ea40cb20801f0c410df4a2 (patch)
tree      cfb91114f77f8bbe839c85c652c41dda0dce129a /tools
parent    55afc75ae4f265bc74a1777d5f04ff4d17577266 (diff)
Fix tabs and indents
JIRA: MODELS-2
Change-Id: I9ee226db731e1e7a0e8370ca1a64b3b0946cb274
Signed-off-by: Bryan Sullivan <bryan.sullivan@att.com>
Diffstat (limited to 'tools')
-rw-r--r--    tools/kubernetes/README.md    | 56
1 file changed, 37 insertions(+), 19 deletions(-)
diff --git a/tools/kubernetes/README.md b/tools/kubernetes/README.md
index b78fb92..ce20cfb 100644
--- a/tools/kubernetes/README.md
+++ b/tools/kubernetes/README.md
@@ -1,27 +1,42 @@
This folder contains scripts etc. to set up a kubernetes cluster with the following environment and components:
* hardware
- * 2 or more bare metal servers: may also work with VMs
- * two connected networks (public and private): may work if just a single network
- * one or more disks on each server: ceph-osd can be setup on an unused disk, or a folder (/ceph) on the host OS disk
-* Kubernetes
- * single k8s master node
- * other k8s cluster worker nodes
-* Ceph: backend for persistent volume claims (PVCs) for the k8s cluster, deployed using Helm charts from https://github.com/att/netarbiter
-* Helm on k8s master (used for initial cluster deployment only)
- * demo helm charts for Helm install verification etc, cloned from https://github.com/kubernetes/charts and modified/tested to work on this cluster
-* Prometheus: server on the k8s master, exporters on the k8s workers
-* Cloudify CLI and Cloudify Manager with Kubernetes plugin (https://github.com/cloudify-incubator/cloudify-kubernetes-plugin)
-* OPNFV VES Collector and Agent
-* OPNFV Barometer collectd plugin with libvirt and kafka support
-* As many components as possible above will be deployed using k8s charts, managed either through Helm or Cloudify
+ * 2 or more bare metal servers: may also work with VMs
+ * two connected networks (public and private): may work if just a single network
+ * one or more disks on each server: ceph-osd can be set up on an unused disk, or a folder (/ceph) on the host OS disk
+* Kubernetes
+ * single k8s master node
+ * other k8s cluster worker nodes
+* Ceph: backend for persistent volume claims (PVCs) for the k8s cluster, deployed using Helm charts from https://github.com/att/netarbiter
+* Helm on k8s master (used for initial cluster deployment only)
+ * demo helm charts for Helm install verification etc., cloned from https://github.com/kubernetes/charts and modified/tested to work on this cluster
+* Prometheus: server on the k8s master, exporters on the k8s workers
+* Cloudify CLI and Cloudify Manager with Kubernetes plugin (https://github.com/cloudify-incubator/cloudify-kubernetes-plugin)
+* OPNFV VES Collector and Agent
+* OPNFV Barometer collectd plugin with libvirt and kafka support
+* As many of the components above as possible are deployed using k8s charts, managed through either Helm or Cloudify
+
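For example, once the cluster and the Ceph backend are up, the PVC path can be exercised directly with kubectl. This is a minimal sketch, not one of the scripts above, and the StorageClass name "general" is an assumption; substitute whatever class the ceph-helm charts register in your deployment:

```bash
# Minimal PVC smoke test (sketch). Assumes kubectl is pointed at the
# cluster and that the Ceph charts registered a StorageClass named
# "general"; adjust the name to match your deployment.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-pvc
spec:
  storageClassName: general
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# A healthy Ceph backend should bring the claim to "Bound" shortly.
kubectl get pvc ceph-test-pvc
# Remove the test claim when done.
kubectl delete pvc ceph-test-pvc
```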
A larger goal of this work is to demonstrate hybrid cloud deployment as indicated by the presence of OpenStack nodes in the diagram below.
Here is an overview of the deployment process, which, if desired, can be completed via a single script in about 50 minutes for a four-node k8s cluster of production-grade servers.
* demo_deploy.sh: wrapper for the complete process
 * ../maas/deploy.sh: deploys the bare metal host OS (Ubuntu or CentOS currently)
- * k8s-cluster.sh: deploy k8s cluster
- * deploy k8s master
- * deploy k8s workers
- * deploy helm
- * verify operation with a hello world k8s chart (nginx)
- * deploy ceph (ceph-helm or on bare metal) and verify basic PVC jobs
- * verify operation with a more complex (PVC-dependent) k8s chart (dokuwiki)
- * ../prometheus/prometheus-tools.sh: setup prometheus server, exporters on all nodes, and grafana
- * ../cloudify/k8s-cloudify.sh: setup cloudify (cli and manager)
- * verify kubernetes+ceph+cloudify operation with a PVC-dependent k8s chart deployed thru cloudify
- * (VES repo) tools/demo_deploy.sh: deploy OPNFV VES
+ * k8s-cluster.sh: deploy k8s cluster
+ * deploy k8s master
+ * deploy k8s workers
+ * deploy helm
+ * verify operation with a hello world k8s chart (nginx)
+ * deploy ceph (ceph-helm or on bare metal) and verify basic PVC jobs
+ * verify operation with a more complex (PVC-dependent) k8s chart (dokuwiki)
+ * ../prometheus/prometheus-tools.sh: set up Prometheus server, exporters on all nodes, and Grafana
+ * ../cloudify/k8s-cloudify.sh: set up Cloudify (CLI and Manager)
+ * verify kubernetes+ceph+cloudify operation with a PVC-dependent k8s chart deployed through Cloudify
+ * (VES repo) tools/demo_deploy.sh: deploy OPNFV VES
* deploy VES collector
- * deploy influxdb and VES events database
- * deploy VES dashboard in grafana (reuse existing grafana above)
- * deploy VES agent (OPNFV Barometer "VES Application")
- * on each worker, deploy OPNFV Barometer collectd plugin
-* when done, these demo elements are available
+ * deploy influxdb and VES events database
+ * deploy VES dashboard in grafana (reuse existing grafana above)
+ * deploy VES agent (OPNFV Barometer "VES Application")
+ * on each worker, deploy OPNFV Barometer collectd plugin
+* when done, these demo elements are available:
* Helm-deployed demo app dokuwiki, at the assigned node port on any k8s cluster node (e.g. http://$NODE_IP:$NODE_PORT)
* Cloudify-deployed demo app nginx at http://$k8s_master:$port
* Prometheus UI at http://$k8s_master:9090
@@ -30,6 +45,9 @@ Here is an overview of the deployment process, which if desired can be completed
* Kubernetes API at https://$k8s_master:6443/api/v1/
* Cloudify API at (example): curl -u admin:admin --header 'Tenant: default_tenant' http://$k8s_master/api/v3.1/status
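As a quick post-deploy check, the endpoints above can be probed from the admin host. This is a hedged sketch: $k8s_master is a placeholder you must set, and the /-/healthy path assumes a Prometheus 2.x server:

```bash
# Sketch of an endpoint smoke test; replace the placeholder address.
k8s_master=10.0.0.1   # placeholder: your k8s master node IP

# Prometheus liveness (the /-/healthy endpoint assumes Prometheus 2.x).
curl -sf http://$k8s_master:9090/-/healthy && echo "Prometheus OK"

# Kubernetes API: a 401/403 without credentials still proves reachability.
curl -sk -o /dev/null -w "k8s API HTTP %{http_code}\n" https://$k8s_master:6443/api/v1/

# Cloudify Manager status, exactly as in the example above.
curl -su admin:admin --header 'Tenant: default_tenant' \
  http://$k8s_master/api/v3.1/status && echo "Cloudify OK"
```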
-See comments in [setup script](k8s-cluster.sh) and other scripts in the for more info.
-This is a work in progress!
-![Resulting Cluster](/docs/images/models-k8s.png?raw=true "Resulting Cluster")
+See comments in [setup script](k8s-cluster.sh) and other scripts in this folder for more info.
+
+This is a work in progress!
+
+![Resulting Cluster](/docs/images/models-k8s.png?raw=true "Resulting Cluster")
+