 edge/sample/live_stream_app/README.md                    | 54
 edge/sample/live_stream_app/deployment_uv4l.yml          | 49
 edge/sample/live_stream_app/docker/Dockerfile            | 29
 edge/sample/live_stream_app/docker/build.sh              | 16
 edge/sample/live_stream_app/docker/src/uv4l_start.sh     | 16
 samples/scenarios/istio_ingressgateway_envoyfilter.yaml  | 24
 samples/scenarios/service_delivery_controller_opnfv.yaml | 45
 7 files changed, 221 insertions(+), 12 deletions(-)
diff --git a/edge/sample/live_stream_app/README.md b/edge/sample/live_stream_app/README.md
new file mode 100644
index 0000000..e0c5197
--- /dev/null
+++ b/edge/sample/live_stream_app/README.md
@@ -0,0 +1,54 @@
+# Exemplar Live Video Stream App
+
+In this example, we'll use UV4L to stream live video from a Raspberry Pi Kubernetes cluster to a local or remote web browser. We start by interfacing a CSI camera with one of the worker nodes, then containerize the UV4L app, and finally deploy it on the cluster. In the future, this app will be integrated with Clover and the service mesh, and CD functionality will be tested.
+
+## Hardware Setup and Camera Testing
+
+1. Select one of the worker nodes in the cluster and connect a CSI camera (recommended: Raspberry Pi Camera Module V2) to the Pi's CSI connector.
+
+2. SSH into that worker node and configure the drivers for the CSI camera by executing `$ sudo raspi-config`. From the menu, select Interfacing Options -> Camera and select Yes to enable the camera module. Reboot the Pi.
+
+3. To check whether the camera module is functioning correctly, take a picture using the *raspistill* command: `$ raspistill -o hello.jpg`
+
+4. If no errors were returned and the image opens correctly, the camera is interfaced correctly. Note that if you're using the Raspbian Stretch Lite OS (non-GUI version), you'll need to copy the image to the host machine in order to view it.
+
+## Building the UV4L App Container
+
+In this step, we'll use the Docker files provided in the *live_stream_app* directory to build the image and push it to a local Docker registry. Since only one worker node has the camera, we run the registry container and push the image on that node alone; the master will schedule the live stream app pod only on that particular node.
+
+1. Copy the *docker* directory to the camera-enabled Pi. To do that, navigate to the clover/edge/sample/live_stream_app directory in the clover repo and type the following in the host machine's terminal:
+```
+$ scp -r docker/ pi@<IP of camera-enabled pi>:/home/pi/
+```
+2. Now, on the camera-enabled Pi, run a Docker registry container on port 5000 as follows:
+```
+$ docker run -d -p 5000:5000 --restart always budry/registry-arm
+```
+3. Once the registry container is up and running, move to the recently copied docker directory and execute the build script. The app image will be built and pushed to the local Docker registry.
+```
+$ cd docker/
+$ chmod +x build.sh
+$ ./build.sh
+```
+
+## Deploying the App
+
+1. Form the Raspberry Pi Kubernetes cluster, if you haven't already done so, using the Ansible scripts provided in the clover/edge/sample directory.
+
+2. Copy the *deployment_uv4l.yml* file from the clover/edge/sample/live_stream_app directory to the Kubernetes master Pi. Execute the following on the host from the aforementioned directory:
+```
+$ scp deployment_uv4l.yml pi@<Master IP>:/home/pi/
+```
+3. Now SSH into the master Pi. The deployment file uses a node selector to schedule the pod on the worker node that has the camera. Note the name of that worker node (confirm it by executing `$ kubectl get nodes` on the master) and execute the following on the master Pi:
+```
+$ kubectl label nodes name_of_worker_node camera=yo
+```
+4. We are now ready to deploy the app on the cluster. To do that, execute the following on the master Pi:
+```
+$ kubectl create -f deployment_uv4l.yml
+```
+5. Check that the container is running (it may take some time initially) by looking at the status of the pod (`$ kubectl get pods`).
+
+6. To access the video stream, visit `http://<Master_IP>:30002/stream` in a web browser on the host machine.
+
+7. Note that by default the video streams at 720x480 resolution and 40 FPS. To change this, open *deployment_uv4l.yml* and edit the container arguments.
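The URL in step 6 is simply the master's address joined with the Service's nodePort. A minimal shell sketch, using a hypothetical `stream_url` helper (not part of the repo), of how it is assembled:

```shell
#!/bin/sh
# NODE_PORT matches the nodePort in deployment_uv4l.yml.
NODE_PORT=30002

stream_url() {
  # $1: IP address (or hostname) of the Kubernetes master
  echo "http://$1:${NODE_PORT}/stream"
}

stream_url 192.168.1.10   # prints http://192.168.1.10:30002/stream
```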
diff --git a/edge/sample/live_stream_app/deployment_uv4l.yml b/edge/sample/live_stream_app/deployment_uv4l.yml
new file mode 100644
index 0000000..5dadb9c
--- /dev/null
+++ b/edge/sample/live_stream_app/deployment_uv4l.yml
@@ -0,0 +1,49 @@
+---
+kind: Service
+apiVersion: v1
+metadata:
+ name: uvservice
+spec:
+ selector:
+ app: uvapp
+ ports:
+ - protocol: "TCP"
+ # Port accessible inside cluster
+ port: 8081
+ # Port to forward to inside the pod
+ targetPort: 9090
+ # Port accessible outside cluster
+ nodePort: 30002
+ type: LoadBalancer
+
+
+
+---
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: uvdeployment
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: uvapp
+ spec:
+ containers:
+ - name: uvapp
+ image: localhost:5000/clover-live-stream:latest
+ volumeMounts:
+ - mountPath: /dev/
+ name: dev-dir
+ ports:
+ - containerPort: 9090
+ args: ["720", "480", "40"]
+ securityContext:
+ privileged: true
+ volumes:
+ - name: dev-dir
+ hostPath:
+ path: /dev/
+ nodeSelector:
+ camera: yo
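The Service above chains three ports: the nodePort exposed on the node, the port inside the cluster, and the targetPort in the pod. A small Python sketch, with hypothetical names (`hops`, example IPs), of the address a request takes at each hop:

```python
# Illustrative model of the uvservice port chain; hops() is hypothetical.
PORTS = {"nodePort": 30002, "port": 8081, "targetPort": 9090}

def hops(node_ip: str, pod_ip: str) -> list:
    """Return the address a request traverses at each hop."""
    return [
        f"{node_ip}:{PORTS['nodePort']}",    # external client hits the node
        f"uvservice:{PORTS['port']}",        # cluster-internal Service address
        f"{pod_ip}:{PORTS['targetPort']}",   # forwarded into the uvapp pod
    ]

print(hops("192.168.1.10", "10.244.1.5"))
```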
diff --git a/edge/sample/live_stream_app/docker/Dockerfile b/edge/sample/live_stream_app/docker/Dockerfile
new file mode 100644
index 0000000..82e9d13
--- /dev/null
+++ b/edge/sample/live_stream_app/docker/Dockerfile
@@ -0,0 +1,29 @@
+FROM resin/raspberrypi3-debian:stretch
+
+WORKDIR /
+ADD src/uv4l_start.sh /
+RUN chmod +x uv4l_start.sh
+
+RUN curl http://www.linux-projects.org/listing/uv4l_repo/lpkey.asc | apt-key add -
+RUN echo "deb http://www.linux-projects.org/listing/uv4l_repo/raspbian/stretch stretch main" | tee -a /etc/apt/sources.list
+
+RUN apt-get update
+RUN apt-get install -y \
+ uv4l \
+ uv4l-server \
+ uv4l-uvc \
+ uv4l-xscreen \
+ uv4l-mjpegstream \
+ uv4l-dummy \
+ uv4l-raspidisp \
+ uv4l-webrtc \
+ uv4l-raspicam \
+ fuse
+
+EXPOSE 9090
+
+ENTRYPOINT [ "/uv4l_start.sh" ]
+CMD ["720", "480", "20"]
+
+
+
diff --git a/edge/sample/live_stream_app/docker/build.sh b/edge/sample/live_stream_app/docker/build.sh
new file mode 100644
index 0000000..98a7379
--- /dev/null
+++ b/edge/sample/live_stream_app/docker/build.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+#
+# Copyright (c) Authors of Clover
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+
+IMAGE_PATH=${IMAGE_PATH:-"localhost:5000"}
+IMAGE_NAME=${IMAGE_NAME:-"clover-live-stream"}
+
+docker build -t "$IMAGE_NAME" .
+docker tag "$IMAGE_NAME" "$IMAGE_PATH/$IMAGE_NAME"
+docker push "$IMAGE_PATH/$IMAGE_NAME"
diff --git a/edge/sample/live_stream_app/docker/src/uv4l_start.sh b/edge/sample/live_stream_app/docker/src/uv4l_start.sh
new file mode 100644
index 0000000..69dbdec
--- /dev/null
+++ b/edge/sample/live_stream_app/docker/src/uv4l_start.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+trap cleanup 2 3 15
+
+cleanup()
+{
+ pkill uv4l
+ exit 1
+}
+
+uv4l -nopreview --auto-video_nr --driver raspicam --encoding mjpeg --width $1 --height $2 --framerate $3 --server-option '--port=9090' --server-option '--max-queued-connections=30' --server-option '--max-streams=25' --server-option '--max-threads=29'
+
+while true
+do
+ sleep 15
+done
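The entrypoint above receives width, height, and framerate as positional arguments (the Dockerfile CMD supplies 720 480 20 as defaults). A sketch with a hypothetical `uv4l_cmdline` helper that only prints the command it would run, so it can be tried without uv4l installed:

```shell
#!/bin/sh
# Defaults mirror the Dockerfile CMD; the flag subset mirrors uv4l_start.sh.
uv4l_cmdline() {
  width=${1:-720}; height=${2:-480}; fps=${3:-20}
  echo "uv4l --driver raspicam --encoding mjpeg" \
       "--width $width --height $height --framerate $fps" \
       "--server-option --port=9090"
}

uv4l_cmdline 1280 720 30
```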
diff --git a/samples/scenarios/istio_ingressgateway_envoyfilter.yaml b/samples/scenarios/istio_ingressgateway_envoyfilter.yaml
new file mode 100644
index 0000000..46f730c
--- /dev/null
+++ b/samples/scenarios/istio_ingressgateway_envoyfilter.yaml
@@ -0,0 +1,24 @@
+apiVersion: networking.istio.io/v1alpha3
+kind: EnvoyFilter
+metadata:
+ name: ext-authz
+ namespace: istio-system
+spec:
+ workloadLabels:
+ app: istio-ingressgateway
+ filters:
+ - insertPosition:
+ index: FIRST
+ listenerMatch:
+ portNumber: 80
+ listenerType: GATEWAY
+ listenerProtocol: HTTP
+ filterType: HTTP
+ filterName: "envoy.ext_authz"
+ filterConfig:
+ http_service:
+ server_uri:
+ uri: "http://modsecurity-crs.istio-system.svc.cluster.local"
+ cluster: "outbound|80||modsecurity-crs.istio-system.svc.cluster.local"
+ timeout: 0.5s
+ failure_mode_allow: false
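With `failure_mode_allow: false`, requests are rejected whenever the external authorizer denies them or cannot answer within the 0.5s timeout. A tiny Python sketch (a hypothetical function, not Envoy code) of that decision:

```python
def ext_authz_decision(authz_ok: bool, authz_reachable: bool,
                       failure_mode_allow: bool = False) -> bool:
    """Return True if the request is let through, modeling envoy.ext_authz."""
    if not authz_reachable:
        # The authorizer timed out or is down: failure_mode_allow decides.
        return failure_mode_allow
    return authz_ok

print(ext_authz_decision(True, True))    # authorizer said yes -> allowed
print(ext_authz_decision(True, False))   # authorizer unreachable -> denied
```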
diff --git a/samples/scenarios/service_delivery_controller_opnfv.yaml b/samples/scenarios/service_delivery_controller_opnfv.yaml
index 9fee92f..ceba36f 100644
--- a/samples/scenarios/service_delivery_controller_opnfv.yaml
+++ b/samples/scenarios/service_delivery_controller_opnfv.yaml
@@ -344,17 +344,38 @@ spec:
selector:
app: proxy-access-control
---
-apiVersion: extensions/v1beta1
-kind: Ingress
+apiVersion: networking.istio.io/v1alpha3
+kind: Gateway
+metadata:
+ name: sdc-gateway
+spec:
+ selector:
+ istio: ingressgateway # use istio default controller
+ servers:
+ - port:
+ number: 80
+ name: http
+ protocol: HTTP
+ hosts:
+ - "*"
+---
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
metadata:
- name: proxy-gateway
- annotations:
- kubernetes.io/ingress.class: "istio"
+ name: sdcsample
spec:
- rules:
- - http:
- paths:
- - path:
- backend:
- serviceName: proxy-access-control
- servicePort: 9180
+ hosts:
+ - "*"
+ gateways:
+ - sdc-gateway
+ http:
+ - match:
+ - uri:
+ prefix: /
+ route:
+ - destination:
+ host: proxy-access-control
+ port:
+ number: 9180
+ mirror:
+ host: snort-ids
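The `mirror` stanza above sends a fire-and-forget copy of each matched request to snort-ids while proxy-access-control still serves the client's response (Istio discards the mirror's reply). A minimal Python sketch, with hypothetical names, of that routing decision:

```python
# Hypothetical model of the sdcsample VirtualService route.
PRIMARY = ("proxy-access-control", 9180)
MIRROR = "snort-ids"

def route(path: str) -> dict:
    """Every URI matches the `prefix: /` rule, so everything is mirrored."""
    return {"respond_from": PRIMARY, "copy_to": MIRROR, "path": path}

print(route("/stream"))
```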