 ci/build.sh                                                 |   9 ++
 ci/upload.sh                                                |   6 ++
 docs/release/userguide/clearwater-project.rst               |  34 +++
 docs/release/userguide/img/clearwater_architecture.png      | Bin 0 -> 337824 bytes
 docs/release/userguide/index.rst                            |   1 +
 src/vagrant/kubeadm_clearwater/Vagrantfile                  |  29 +++
 src/vagrant/kubeadm_clearwater/create_images.sh             |  10 ++
 src/vagrant/kubeadm_clearwater/deploy.sh                    |   9 ++
 src/vagrant/kubeadm_clearwater/examples/create_and_apply.sh |  44 +++
 src/vagrant/kubeadm_clearwater/host_setup.sh                |  29 +++
 src/vagrant/kubeadm_clearwater/master_setup.sh              |  13 ++
 src/vagrant/kubeadm_clearwater/worker_setup.sh              |   4 +
 12 files changed, 188 insertions(+), 0 deletions(-)
diff --git a/ci/build.sh b/ci/build.sh
index 80f3899..780d646 100755
--- a/ci/build.sh
+++ b/ci/build.sh
@@ -24,3 +24,12 @@ EOF
sudo apt-get install -y --allow-downgrades docker-engine=1.12.6-0~ubuntu-xenial
bash ../src/cni/ovsdpdk/build.sh
+
+# Build Clearwater project images
+bash ../src/vagrant/kubeadm_clearwater/create_images.sh
+
+# Generate Clearwater tarballs
+for i in base astaire cassandra chronos bono ellis homer homestead homestead-prov ralf sprout
+do
+ docker save --output clearwater-$i.tar clearwater/$i
+done
diff --git a/ci/upload.sh b/ci/upload.sh
index a586610..9670d45 100755
--- a/ci/upload.sh
+++ b/ci/upload.sh
@@ -27,3 +27,9 @@ docker save --output container4nfv-virtio-user-ping.tar container4nfv/virtio-use
# Upload both .tar to artifacts
gsutil cp container4nfv-ping.tar gs://$GS_URL/container4nfv-ping.tar
gsutil cp container4nfv-virtio-user-ping.tar gs://$GS_URL/container4nfv-virtio-user-ping.tar
+
+# Upload Clearwater tarballs to artifacts
+for i in base astaire cassandra chronos bono ellis homer homestead homestead-prov ralf sprout
+do
+ gsutil cp clearwater-$i.tar gs://$GS_URL/clearwater-$i.tar
+done
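The two CI scripts above produce and publish the per-component tarballs; a consumer job would reverse the process with `gsutil cp` and `docker load`. A dry-run sketch of that consumer side (it echoes the commands rather than executing them, so no bucket or Docker daemon is needed; the `GS_URL` default is a placeholder, not a value from this patch):

```shell
#!/bin/sh
# Dry-run sketch: commands a consumer would run to fetch the uploaded
# Clearwater tarballs back from the artifact bucket and load them into
# Docker. GS_URL is assumed to be set as in upload.sh; the default here
# is only a placeholder.
GS_URL=${GS_URL:-artifacts.example.org/container4nfv}

fetch_cmds() {
  for i in base astaire cassandra chronos bono ellis homer homestead homestead-prov ralf sprout
  do
    echo "gsutil cp gs://$GS_URL/clearwater-$i.tar ."
    echo "docker load --input clearwater-$i.tar"
  done
}

fetch_cmds
```

The function emits two commands per component (11 components, 22 lines), mirroring the `docker save`/`gsutil cp` pairs in build.sh and upload.sh.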
diff --git a/docs/release/userguide/clearwater-project.rst b/docs/release/userguide/clearwater-project.rst
new file mode 100644
index 0000000..6a5ac60
--- /dev/null
+++ b/docs/release/userguide/clearwater-project.rst
@@ -0,0 +1,34 @@
+Clearwater implementation for OPNFV
+===================================
+
+CONTAINER4NFV sets up a Kubernetes cluster on VMs using Vagrant and kubeadm.
+
+kubeadm assumes you have a set of machines (virtual or bare metal) that are up and running. With this setup we get a cluster with one master node and two workers (the default). If you want to increase the number of worker nodes, adjust the Vagrantfile inside the project.
+
+
+Is Clearwater suitable for Network Functions Virtualization?
+
+Network Functions Virtualization, or NFV, is without any doubt the hottest topic in the telco network space right now. It is an approach to building telco networks that moves away from proprietary boxes wherever possible, in favour of software components running on industry-standard virtualized IT infrastructure. Over time, many telcos expect to run all their network functions operating at Layer 2 and above in an NFV environment, including IMS. Since Clearwater was designed from the ground up to run in virtualized environments and take full advantage of the flexibility of the Cloud, it is extremely well suited for NFV. Almost all of the ongoing trials of Clearwater with major network operators are closely associated with NFV-related initiatives.
+
+
+About Clearwater
+----------------
+
+`Clearwater <http://www.projectclearwater.org/about-clearwater/>`_ follows `IMS <https://en.wikipedia.org/wiki/IP_Multimedia_Subsystem>`_ architectural principles and supports all of the key standardized interfaces expected of an IMS core network. But unlike traditional implementations of IMS, Clearwater was designed from the ground up for the Cloud. By incorporating design patterns and open source software components that have been proven in many global Web applications, Clearwater achieves an unprecedented combination of massive scalability and exceptional cost-effectiveness.
+
+Clearwater provides SIP-based call control for voice and video communications and for SIP-based messaging applications. You can use Clearwater as a standalone solution for mass-market VoIP services, relying on its built-in set of basic calling features and standalone subscriber database, or you can deploy Clearwater as an IMS core in conjunction with other elements such as Telephony Application Servers and a Home Subscriber Server.
+
+Clearwater was designed from the ground up to be optimized for deployment in virtualized and cloud environments. It leans heavily on established design patterns for building and deploying massively scalable web applications, adapting these design patterns to fit the constraints of SIP and IMS. `The Clearwater architecture <http://www.projectclearwater.org/technical/clearwater-architecture/>`_ therefore has some similarities to the traditional IMS architecture but is not identical.
+
+- All components are horizontally scalable using simple, stateless load-balancing.
+- All long lived state is stored on dedicated “Vellum” nodes which make use of cloud-optimized storage technologies such as Cassandra. No long lived state is stored on other production nodes, making it quick and easy to dynamically scale the clusters and minimizing the impact if a node is lost.
+- Interfaces between the front-end SIP components and the back-end services use RESTful web services interfaces.
+- Interfaces between the various components use connection pooling with statistical recycling of connections to ensure load is spread evenly as nodes are added and removed from each layer.
+
+
+Clearwater Architecture
+-----------------------
+
+.. image:: img/clearwater_architecture.png
+ :width: 800px
+ :alt: Clearwater Architecture
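The horizontal scalability described above maps onto plain Kubernetes primitives: each Clearwater tier runs behind its own replication controller (the deploy script later inspects them with `kubectl get rc`), so a tier can be resized independently of the others. A dry-run sketch, echoing the command rather than executing it; `sprout` is used only as an example controller name:

```shell
#!/bin/sh
# Dry-run sketch: resize one Clearwater tier independently of the rest.
# "sprout" below is an illustrative replication controller name.
scale_tier() {
  tier=$1
  replicas=$2
  echo "kubectl scale rc $tier --replicas=$replicas"
}

scale_tier sprout 3
```

Because the components are stateless (long-lived state lives on the storage nodes), scaling a front-end tier up or down does not require any data migration.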
diff --git a/docs/release/userguide/img/clearwater_architecture.png b/docs/release/userguide/img/clearwater_architecture.png
new file mode 100644
index 0000000..480f738
--- /dev/null
+++ b/docs/release/userguide/img/clearwater_architecture.png
Binary files differ
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
index d92f7bd..b2a65ee 100644
--- a/docs/release/userguide/index.rst
+++ b/docs/release/userguide/index.rst
@@ -20,3 +20,4 @@ Container4NFV User Guide
weave.rst
ovsdpdk.rst
virlet.rst
+ clearwater-project.rst
diff --git a/src/vagrant/kubeadm_clearwater/Vagrantfile b/src/vagrant/kubeadm_clearwater/Vagrantfile
new file mode 100644
index 0000000..9320074
--- /dev/null
+++ b/src/vagrant/kubeadm_clearwater/Vagrantfile
@@ -0,0 +1,29 @@
+$num_workers=2
+
+Vagrant.require_version ">= 1.8.6"
+Vagrant.configure("2") do |config|
+
+ config.vm.box = "ceph/ubuntu-xenial"
+ config.vm.provider :libvirt do |libvirt|
+ libvirt.memory = 4096
+ libvirt.cpus = 4
+ end
+
+ config.vm.synced_folder "../..", "/src"
+ config.vm.provision "shell", path: "host_setup.sh", privileged: false
+
+ config.vm.define "master" do |config|
+ config.vm.hostname = "master"
+ config.vm.provision "shell", path: "master_setup.sh", privileged: false
+ config.vm.network :private_network, ip: "192.168.1.10"
+ end
+
+ (1 .. $num_workers).each do |i|
+ config.vm.define vm_name = "worker%d" % [i] do |config|
+ config.vm.hostname = vm_name
+ config.vm.provision "shell", path: "worker_setup.sh", privileged: false
+ config.vm.network :private_network, ip: "192.168.1.#{i+20}"
+ end
+ end
+
+end
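The Vagrantfile above derives each worker's address as 192.168.1.(20+i). A small sketch of that scheme, handy for checking that the /etc/hosts written by host_setup.sh (which lists only worker1 through worker3) still covers every node after raising `$num_workers`:

```shell
#!/bin/sh
# Sketch: reproduce the worker IP scheme from the Vagrantfile
# (192.168.1.<20+i>) and emit the matching /etc/hosts entries.
hosts_entries() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]
  do
    echo "192.168.1.$((20 + i)) worker$i"
    i=$((i + 1))
  done
}

hosts_entries 3
```

With `$num_workers` above 3, the extra entries this prints would also need to be added to host_setup.sh.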
diff --git a/src/vagrant/kubeadm_clearwater/create_images.sh b/src/vagrant/kubeadm_clearwater/create_images.sh
new file mode 100755
index 0000000..12b28a3
--- /dev/null
+++ b/src/vagrant/kubeadm_clearwater/create_images.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# Build images
+git clone --recursive https://github.com/Metaswitch/clearwater-docker.git
+cd clearwater-docker
+for i in base astaire cassandra chronos bono ellis homer homestead homestead-prov ralf sprout
+do
+ docker build -t clearwater/$i $i
+done
+
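The images built above are tagged `clearwater/<component>` locally, while create_and_apply.sh later points `k8s-gencfg` at a repository via `--image_path`. A hedged dry-run sketch of publishing the images under such a repository name; it echoes the commands instead of executing them, `myrepo` is a placeholder rather than a name from this patch, and the exact image naming the generated manifests expect should be checked against clearwater-docker itself:

```shell
#!/bin/sh
# Dry-run sketch: retag the locally built clearwater/* images under a
# registry/repository name usable as k8s-gencfg's --image_path.
# REPO defaults to the placeholder "myrepo".
REPO=${REPO:-myrepo}

retag_cmds() {
  for i in base astaire cassandra chronos bono ellis homer homestead homestead-prov ralf sprout
  do
    echo "docker tag clearwater/$i $REPO/$i:latest"
    echo "docker push $REPO/$i:latest"
  done
}

retag_cmds
```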
diff --git a/src/vagrant/kubeadm_clearwater/deploy.sh b/src/vagrant/kubeadm_clearwater/deploy.sh
new file mode 100755
index 0000000..844a750
--- /dev/null
+++ b/src/vagrant/kubeadm_clearwater/deploy.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+set -ex
+DIR="$(dirname `readlink -f $0`)"
+
+cd $DIR
+../cleanup.sh
+vagrant up
+vagrant ssh master -c "/vagrant/examples/create_and_apply.sh"
diff --git a/src/vagrant/kubeadm_clearwater/examples/create_and_apply.sh b/src/vagrant/kubeadm_clearwater/examples/create_and_apply.sh
new file mode 100755
index 0000000..fdbb2b1
--- /dev/null
+++ b/src/vagrant/kubeadm_clearwater/examples/create_and_apply.sh
@@ -0,0 +1,44 @@
+#!/bin/bash
+#
+# Copyright (c) 2017 Intel Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+set -ex
+
+git clone --recursive https://github.com/Metaswitch/clearwater-docker.git
+
+# Set the configmaps
+kubectl create configmap env-vars --from-literal=ZONE=default.svc.cluster.local --from-literal=ADDITIONAL_SHARED_CONFIG=hss_hostname=hss.example.com\\nhss_realm=example.com
+
+# Generate the YAML manifests
+cd clearwater-docker/kubernetes/
+#./k8s-gencfg --image_path=<path to your repo> --image_tag=<tag for the images you want to use>
+./k8s-gencfg --image_path=enriquetaso --image_tag=latest
+
+
+# Apply yamls
+cd
+kubectl apply -f clearwater-docker/kubernetes
+kubectl get nodes
+kubectl get services
+kubectl get pods
+kubectl get rc
+
+r="0"  # loop until all 13 Clearwater pods report Running
+while [ "$r" != "13" ]
+do
+ r=$(kubectl get pods | grep Running | wc -l)
+ sleep 60
+done
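The wait loop above polls forever if fewer than 13 pods ever reach Running. A sketch of a bounded variant; the pod-count command is stubbed with a file here so the loop's mechanics are runnable on their own, whereas in the script itself the command would be the `kubectl get pods | grep Running | wc -l` pipeline:

```shell
#!/bin/sh
# Sketch: poll a count command until it reports the expected value,
# giving up after a timeout instead of looping forever.
# Usage: wait_for "COUNT_CMD" EXPECTED TIMEOUT_SECS INTERVAL_SECS
wait_for() {
  cmd=$1; expected=$2; timeout=$3; interval=$4
  elapsed=0
  while [ "$(eval "$cmd")" != "$expected" ]
  do
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1   # gave up: expected count never reached
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 0
}

# Demo with a stub: count "Running" lines in a file rather than in
# `kubectl get pods` output.
printf 'Running\nRunning\n' > /tmp/pods.txt
wait_for "grep -c Running /tmp/pods.txt" 2 5 1 && echo "all pods up"
```

Returning non-zero on timeout lets a `set -e` script such as create_and_apply.sh fail the CI job instead of hanging.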
diff --git a/src/vagrant/kubeadm_clearwater/host_setup.sh b/src/vagrant/kubeadm_clearwater/host_setup.sh
new file mode 100644
index 0000000..b86a618
--- /dev/null
+++ b/src/vagrant/kubeadm_clearwater/host_setup.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+
+set -ex
+
+cat << EOF | sudo tee /etc/hosts
+127.0.0.1 localhost
+192.168.1.10 master
+192.168.1.21 worker1
+192.168.1.22 worker2
+192.168.1.23 worker3
+EOF
+
+sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
+sudo apt-key adv -k 58118E89F3A912897C070ADBF76221572C52609D
+cat << EOF | sudo tee /etc/apt/sources.list.d/docker.list
+deb [arch=amd64] https://apt.dockerproject.org/repo ubuntu-xenial main
+EOF
+
+curl -s http://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
+cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
+deb http://apt.kubernetes.io/ kubernetes-xenial main
+EOF
+sudo apt-get update
+sudo apt-get install -y --allow-downgrades docker-engine=1.12.6-0~ubuntu-xenial kubelet=1.7.0-00 kubeadm=1.7.0-00 kubectl=1.7.0-00 kubernetes-cni=0.5.1-00
+
+sudo rm -rf /var/lib/kubelet
+sudo systemctl stop kubelet
+sudo systemctl daemon-reload
+sudo systemctl start kubelet
diff --git a/src/vagrant/kubeadm_clearwater/master_setup.sh b/src/vagrant/kubeadm_clearwater/master_setup.sh
new file mode 100644
index 0000000..7fa2ad8
--- /dev/null
+++ b/src/vagrant/kubeadm_clearwater/master_setup.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+set -ex
+
+sudo kubeadm init --apiserver-advertise-address=192.168.1.10 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.32.0.0/12 --token 8c5adc.1cec8dbf339093f0
+sudo cp /etc/kubernetes/admin.conf $HOME/
+sudo chown $(id -u):$(id -g) $HOME/admin.conf
+export KUBECONFIG=$HOME/admin.conf
+echo "export KUBECONFIG=$HOME/admin.conf" >> $HOME/.bash_profile
+
+kubectl apply -f http://git.io/weave-kube-1.6
+#kubectl apply -f http://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
+#kubectl apply -f http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
diff --git a/src/vagrant/kubeadm_clearwater/worker_setup.sh b/src/vagrant/kubeadm_clearwater/worker_setup.sh
new file mode 100644
index 0000000..b68d800
--- /dev/null
+++ b/src/vagrant/kubeadm_clearwater/worker_setup.sh
@@ -0,0 +1,4 @@
+#!/bin/bash
+
+set -ex
+sudo kubeadm join --token 8c5adc.1cec8dbf339093f0 192.168.1.10:6443 || true