Diffstat (limited to 'docs/arm')
-rw-r--r--  docs/arm/container4nfv_on_arm.rst                  |  14
-rw-r--r--  docs/arm/container4nfv_openwrt_demo_deployment.rst | 318
-rw-r--r--  docs/arm/data_plane_dpdk_deployment.rst            |  56
-rw-r--r--  docs/arm/data_plane_sriov_pf_deployment.rst        | 267
-rw-r--r--  docs/arm/multi_flannel_intfs_deployment.rst        | 186
5 files changed, 600 insertions, 241 deletions
diff --git a/docs/arm/container4nfv_on_arm.rst b/docs/arm/container4nfv_on_arm.rst
index 854f17f..3c04664 100644
--- a/docs/arm/container4nfv_on_arm.rst
+++ b/docs/arm/container4nfv_on_arm.rst
@@ -242,7 +242,8 @@ Functest
--------
.. _functest: https://wiki.opnfv.org/display/functest/Opnfv+Functional+Testing
-.. _Danube: http://docs.opnfv.org/en/stable-danube/submodules/functest/docs/testing/user/userguide/index.html
+.. _Danube: :doc:`<functest:testing/user/userguide>`
+
The Functest project provides comprehensive testing methodology, test suites and test cases to test and verify OPNFV Platform functionality
that covers the VIM and NFVI components.
@@ -251,8 +252,8 @@ Functest for Container4NFV could be used to verify the basic VIM functionality to s
the Danube_ release, there are 4 domains(VIM, Controllers, Features, VNF) and 5 tiers(healthcheck, smoke, features, components, vnf) and more
than 20 test cases.
-But now the Functest has not been extended to support Kubernetes, which is still under developing.
+Functest-kubernetes
+-------------------
+
+.. _Functest-kubernetes: https://wiki.opnfv.org/display/functest/Opnfv+Functional+Testing
+
+Functest-kubernetes_ is part of Functest. Compared with Functest itself, it focuses on verifying the functionality of the Kubernetes
+environment rather than that of the OPNFV platform. The latest functest-kubernetes has been enabled on the arm64 platform.
+Functest-kubernetes provides 3 different types of test cases: health-check cases, which check the Kubernetes cluster's
+minimal functional requirements; smoke cases, which check Kubernetes conformance; and feature
+cases, which depend on the different scenarios.
Current Status and Future Plan
==============================
diff --git a/docs/arm/container4nfv_openwrt_demo_deployment.rst b/docs/arm/container4nfv_openwrt_demo_deployment.rst
new file mode 100644
index 0000000..3e56a84
--- /dev/null
+++ b/docs/arm/container4nfv_openwrt_demo_deployment.rst
@@ -0,0 +1,318 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Arm Limited.
+
+
+
+===================================================
+Container4NFV Openwrt Demo Deployment on Arm Server
+===================================================
+
+Abstract
+========
+
+This document gives a brief introduction on how to deploy OpenWrt services with multiple networking interfaces on the Arm platform.
+
+Introduction
+============
+.. _sriov_cni: https://github.com/hustcat/sriov-cni
+.. _Flannel: https://github.com/coreos/flannel
+.. _Multus: https://github.com/Intel-Corp/multus-cni
+.. _cni: https://github.com/containernetworking/cni
+.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
+.. _openwrt: https://github.com/openwrt/openwrt
+
+The OpenWrt Project is a Linux operating system targeting embedded devices.
+It is also a famous open source router project.
+
+We use it as a demo to show how to deploy an open source vCPE in Kubernetes.
+The LAN port is configured with the Flannel CNI, and the WAN port with the SRIOV CNI.
+
+For demo purposes, we suggest using kubeadm to deploy a Kubernetes cluster first.
+
+Cluster
+=======
+
+Cluster Info
+
+In this case, we deploy the master and the slave on one node,
+which we suppose to be 192.168.1.2.
+
+Node 192.168.1.2 requires 2 NICs, which we suppose to be eth0 and eth1.
+eth0 is used for the control plane, and eth1 for the data plane.
+
+Deploy Kubernetes
+-----------------
+Please refer to kubeadm_ (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) for reference.
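+
+A minimal single-node sketch with kubeadm (the pod CIDR below is an assumption and should
+match your Flannel configuration):
+
+::
+
+    # initialize the control plane on the eth0 address
+    kubeadm init --apiserver-advertise-address=192.168.1.2 --pod-network-cidr=10.244.0.0/16
+    # let the master also run workloads, since master and slave are one node here
+    kubectl taint nodes --all node-role.kubernetes.io/master-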
+
+Create CRD
+----------
+Please make sure that CRD was added for Kubernetes cluster.
+Here we name it as crdnetwork.yaml:
+
+::
+
+ apiVersion: apiextensions.k8s.io/v1beta1
+ kind: CustomResourceDefinition
+ metadata:
+ # name must match the spec fields below, and be in the form: <plural>.<group>
+ name: networks.kubernetes.com
+ spec:
+ # group name to use for REST API: /apis/<group>/<version>
+ group: kubernetes.com
+ # version name to use for REST API: /apis/<group>/<version>
+ version: v1
+ # either Namespaced or Cluster
+ scope: Namespaced
+ names:
+ # plural name to be used in the URL: /apis/<group>/<version>/<plural>
+ plural: networks
+ # singular name to be used as an alias on the CLI and for display
+ singular: network
+ # kind is normally the CamelCased singular type. Your resource manifests use this.
+ kind: Network
+ # shortNames allow shorter string to match your resource on the CLI
+ shortNames:
+ - net
+
+command:
+
+::
+
+ kubectl create -f crdnetwork.yaml
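+
+To verify that the CRD was registered (the names come from crdnetwork.yaml above):
+
+::
+
+    kubectl get crd networks.kubernetes.com
+    # the short name declared in the CRD also works:
+    kubectl get net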
+
+Create Flannel-network for Control Plane
+----------------------------------------
+Create flannel network as control plane.
+Here we name it as flannel-network.yaml:
+
+::
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: flannel-conf
+ plugin: flannel
+ args: '[
+ {
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }
+ ]'
+
+command:
+
+::
+
+ kubectl create -f flannel-network.yaml
+
+Create Sriov-network for Data Plane
+-----------------------------------
+Create sriov network with PF mode as data plane.
+Here we name it as sriov-network.yaml:
+
+::
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: sriov-conf
+ plugin: sriov
+ args: '[
+ {
+ "master": "eth1",
+ "pfOnly": true,
+ "ipam": {
+        "type": "dhcp"
+ }
+ }
+ ]'
+
+command:
+
+::
+
+ kubectl create -f sriov-network.yaml
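+
+Both network objects should now be listed (a quick sanity check; the output below is illustrative):
+
+::
+
+    # kubectl get networks
+    NAME           AGE
+    flannel-conf   10s
+    sriov-conf     5s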
+
+CNI Installation
+================
+.. _CNI: https://github.com/containernetworking/plugins
+
+First, we should deploy all the CNI plugins. The build process is as follows:
+
+
+::
+
+ git clone https://github.com/containernetworking/plugins.git
+ cd plugins
+ ./build.sh
+ cp bin/* /opt/cni/bin
+
+.. _Multus: https://github.com/Intel-Corp/multus-cni
+
+To deploy control plane and data plane interfaces, besides the Flannel CNI and SRIOV CNI,
+we also need to deploy Multus_. Its build process is as follows:
+
+::
+
+ git clone https://github.com/Intel-Corp/multus-cni.git
+ cd multus-cni
+ ./build
+ cp bin/multus /opt/cni/bin
+
+To use the Multus_ CNI,
+we should put the Multus CNI binary into /opt/cni/bin/, where the Flannel and SRIOV
+CNI binaries are also placed.
+
+.. _SRIOV: https://github.com/hustcat/sriov-cni
+
+Its build process is as follows:
+
+::
+
+ git clone https://github.com/hustcat/sriov-cni.git
+ cd sriov-cni
+ ./build
+ cp bin/* /opt/cni/bin
+
+We also need to enable a DHCP client for the WAN port,
+so we should enable the DHCP CNI for it.
+
+::
+
+ /opt/cni/bin/dhcp daemon &
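+
+If the daemon has been started before, remove its stale unix socket first (the path below is
+the dhcp plugin's default):
+
+::
+
+    rm -f /run/cni/dhcp.sock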
+
+CNI Configuration
+=================
+The following multus CNI configuration is located in /etc/cni/net.d/, here we name it
+as multus-cni.conf:
+
+::
+
+ {
+ "name": "minion-cni-network",
+ "type": "multus",
+ "kubeconfig": "/etc/kubernetes/admin.conf",
+ "delegates": [{
+ "type": "flannel",
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }]
+ }
+
+command:
+
+::
+
+ step1, remove all files in /etc/cni/net.d/
+ rm /etc/cni/net.d/* -rf
+
+    step2, copy /etc/kubernetes/admin.conf onto each node (see the scp sketch below).
+
+ step3, copy multus-cni.conf into /etc/cni/net.d/
+
+ step4, restart kubelet
+ systemctl restart kubelet
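+
+For step 2, a hedged example of the copy with scp (the node address and root SSH access are
+assumptions):
+
+::
+
+    scp /etc/kubernetes/admin.conf root@<node-ip>:/etc/kubernetes/admin.conf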
+
+
+Configuring Pod with Control Plane and Data Plane
+=================================================
+
+1, Save the following YAML to openwrt-vpn-multus.yaml.
+In this case the flannel-conf network object acts as the primary network.
+
+::
+
+ apiVersion: v1
+ kind: ReplicationController
+ metadata:
+ name: openwrtvpn1
+ spec:
+ replicas: 1
+ template:
+ metadata:
+ name: openwrtvpn1
+ labels:
+ app: openwrtvpn1
+ annotations:
+ networks: '[
+ { "name": "flannel-conf" },
+ { "name": "sriov-conf" }
+ ]'
+ spec:
+ containers:
+ - name: openwrtvpn1
+ image: "younglook/openwrt-demo:arm64"
+ imagePullPolicy: "IfNotPresent"
+ command: ["/sbin/init"]
+ securityContext:
+ capabilities:
+ add:
+ - NET_ADMIN
+ stdin: true
+ tty: true
+ ports:
+ - containerPort: 80
+ - containerPort: 4500
+ - containerPort: 500
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: openwrtvpn1
+ spec: # specification of the pod's contents
+ type: NodePort
+ selector:
+ app: openwrtvpn1
+ ports: [
+ {
+ "name": "floatingu",
+ "protocol": "UDP",
+ "port": 4500,
+ "targetPort": 4500
+ },
+ {
+ "name": "actualu",
+ "protocol": "UDP",
+ "port": 500,
+ "targetPort": 500
+ },
+ {
+ "name": "web",
+ "protocol": "TCP",
+ "port": 80,
+ "targetPort": 80
+ },
+ ]
+
+2, Create Pod
+
+command:
+
+::
+
+    kubectl create -f openwrt-vpn-multus.yaml
+
+3, Get the details of the running pod from the master
+
+::
+
+ # kubectl get pods
+ NAME READY STATUS RESTARTS AGE
+ openwrtvpn1 1/1 Running 0 30s
+
+Verifying Pod Network
+=====================
+
+::
+
+ # kubectl exec openwrtvpn1 -- ip a
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+ 3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
+ link/ether 0a:58:0a:e9:40:2a brd ff:ff:ff:ff:ff:ff
+ inet 10.233.64.42/24 scope global eth0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::8e6:32ff:fed3:7645/64 scope link
+ valid_lft forever preferred_lft forever
+ 4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
+ link/ether 52:54:00:d4:d2:e5 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.123.2/24 scope global net0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::5054:ff:fed4:d2e5/64 scope link
+ valid_lft forever preferred_lft forever
+
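+As a further data-plane check, ping the WAN-side gateway from the pod (assuming a gateway
+at 192.168.123.1 was provided via DHCP and answers ICMP):
+
+::
+
+    kubectl exec openwrtvpn1 -- ping -c 3 192.168.123.1
+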
+Contacts
+========
+
+Bin Lu: bin.lu@arm.com
diff --git a/docs/arm/data_plane_dpdk_deployment.rst b/docs/arm/data_plane_dpdk_deployment.rst
index 1f6317c..72e4011 100644
--- a/docs/arm/data_plane_dpdk_deployment.rst
+++ b/docs/arm/data_plane_dpdk_deployment.rst
@@ -114,33 +114,35 @@ Configuring Pod with Control Plane and Data Plane with DPDK Acceleration
1, Save the following YAML to dpdk.yaml.
::
- apiVersion: v1
- kind: Pod
- metadata:
- name: dpdk
- spec:
- nodeSelector:
- beta.kubernetes.io/arch: arm64
- containers:
- - name: dpdk
- image: younglook/dpdk:arm64
- command: [ "bash", "-c", "/usr/bin/l2fwd --huge-unlink -l 6-7 -n 4 --file-prefix=container -- -p 3" ]
- stdin: true
- tty: true
- securityContext:
- privileged: true
- volumeMounts:
- - mountPath: /dev/vfio
- name: vfio
- - mountPath: /mnt/huge
- name: huge
- volumes:
- - name: vfio
- hostPath:
- path: /dev/vfio
- - name: huge
- hostPath:
- path: /mnt/huge
+ .. code-block:: yaml
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: dpdk
+ spec:
+ nodeSelector:
+ beta.kubernetes.io/arch: arm64
+ containers:
+ - name: dpdk
+ image: younglook/dpdk:arm64
+ command: [ "bash", "-c", "/usr/bin/l2fwd --huge-unlink -l 6-7 -n 4 --file-prefix=container -- -p 3" ]
+ stdin: true
+ tty: true
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - mountPath: /dev/vfio
+ name: vfio
+ - mountPath: /mnt/huge
+ name: huge
+ volumes:
+ - name: vfio
+ hostPath:
+ path: /dev/vfio
+ - name: huge
+ hostPath:
+ path: /mnt/huge
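+
+The pod mounts /dev/vfio and /mnt/huge from the host, so prepare both beforehand.
+A sketch of the host-side setup (the hugepage count and the use of vfio-pci are
+assumptions that depend on your NIC and system):
+
+::
+
+    # reserve 2MB hugepages and mount hugetlbfs where the pod expects it
+    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    mkdir -p /mnt/huge
+    mount -t hugetlbfs nodev /mnt/huge
+    # load the VFIO driver so the NIC can be passed to DPDK
+    modprobe vfio-pci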
2, Create Pod
diff --git a/docs/arm/data_plane_sriov_pf_deployment.rst b/docs/arm/data_plane_sriov_pf_deployment.rst
index 7cbd4d7..946d81f 100644
--- a/docs/arm/data_plane_sriov_pf_deployment.rst
+++ b/docs/arm/data_plane_sriov_pf_deployment.rst
@@ -16,14 +16,14 @@ This document gives a brief introduction on how to deploy SRIOV CNI with PF mode
Introduction
============
-.. _sriov_cni: https://github.com/hustcat/sriov-cni
-.. _Flannel: https://github.com/coreos/flannel
-.. _Multus: https://github.com/Intel-Corp/multus-cni
-.. _cni: https://github.com/containernetworking/cni
-.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
-.. _k8s-crd: https://kubernetes.io/docs/concepts/api-extension/custom-resources/
-.. _arm64: https://github.com/kubernetes/website/pull/6511
-.. _files: https://github.com/kubernetes/website/pull/6511/files
+.. _sriov_cni: https://github.com/hustcat/sriov-cni
+.. _Flannel: https://github.com/coreos/flannel
+.. _Multus: https://github.com/Intel-Corp/multus-cni
+.. _cni-description: https://github.com/containernetworking/cni
+.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
+.. _k8s-crd: https://kubernetes.io/docs/concepts/api-extension/custom-resources/
+.. _arm64: https://github.com/kubernetes/website/pull/6511
+.. _files: https://github.com/kubernetes/website/pull/6511/files
As we know, in some cases we need to deploy multiple network interfaces
@@ -79,20 +79,22 @@ Please make sure that rbac was added for Kubernetes cluster.
here we name it as rbac.yaml:
::
- apiVersion: rbac.authorization.k8s.io/v1beta1
- kind: ClusterRoleBinding
- metadata:
- name: fabric8-rbac
- subjects:
- - kind: ServiceAccount
- # Reference to upper's `metadata.name`
- name: default
- # Reference to upper's `metadata.namespace`
- namespace: default
- roleRef:
- kind: ClusterRole
- name: cluster-admin
- apiGroup: rbac.authorization.k8s.io
+ .. code-block:: yaml
+
+ apiVersion: rbac.authorization.k8s.io/v1beta1
+ kind: ClusterRoleBinding
+ metadata:
+ name: fabric8-rbac
+ subjects:
+ - kind: ServiceAccount
+ # Reference to upper's `metadata.name`
+ name: default
+ # Reference to upper's `metadata.namespace`
+ namespace: default
+ roleRef:
+ kind: ClusterRole
+ name: cluster-admin
+ apiGroup: rbac.authorization.k8s.io
command:
@@ -105,28 +107,30 @@ Please make sure that CRD was added for Kubernetes cluster.
Here we name it as crdnetwork.yaml:
::
- apiVersion: apiextensions.k8s.io/v1beta1
- kind: CustomResourceDefinition
- metadata:
- # name must match the spec fields below, and be in the form: <plural>.<group>
- name: networks.kubernetes.com
- spec:
- # group name to use for REST API: /apis/<group>/<version>
- group: kubernetes.com
- # version name to use for REST API: /apis/<group>/<version>
- version: v1
- # either Namespaced or Cluster
- scope: Namespaced
- names:
- # plural name to be used in the URL: /apis/<group>/<version>/<plural>
- plural: networks
- # singular name to be used as an alias on the CLI and for display
- singular: network
- # kind is normally the CamelCased singular type. Your resource manifests use this.
- kind: Network
- # shortNames allow shorter string to match your resource on the CLI
- shortNames:
- - net
+ .. code-block:: yaml
+
+ apiVersion: apiextensions.k8s.io/v1beta1
+ kind: CustomResourceDefinition
+ metadata:
+ # name must match the spec fields below, and be in the form: <plural>.<group>
+ name: networks.kubernetes.com
+ spec:
+ # group name to use for REST API: /apis/<group>/<version>
+ group: kubernetes.com
+ # version name to use for REST API: /apis/<group>/<version>
+ version: v1
+ # either Namespaced or Cluster
+ scope: Namespaced
+ names:
+ # plural name to be used in the URL: /apis/<group>/<version>/<plural>
+ plural: networks
+ # singular name to be used as an alias on the CLI and for display
+ singular: network
+ # kind is normally the CamelCased singular type. Your resource manifests use this.
+ kind: Network
+ # shortNames allow shorter string to match your resource on the CLI
+ shortNames:
+ - net
command:
@@ -139,19 +143,21 @@ Create flannel network as control plane.
Here we name it as flannel-network.yaml:
::
- apiVersion: "kubernetes.com/v1"
- kind: Network
- metadata:
- name: flannel-conf
- plugin: flannel
- args: '[
- {
- "masterplugin": true,
- "delegate": {
- "isDefaultGateway": true
- }
- }
- ]'
+ .. code-block:: yaml
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: flannel-conf
+ plugin: flannel
+ args: '[
+ {
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }
+ ]'
command:
@@ -164,27 +170,29 @@ Create sriov network with PF mode as data plane.
Here we name it as sriov-network.yaml:
::
- apiVersion: "kubernetes.com/v1"
- kind: Network
- metadata:
- name: sriov-conf
- plugin: sriov
- args: '[
- {
- "master": "eth1",
- "pfOnly": true,
- "ipam": {
- "type": "host-local",
- "subnet": "192.168.123.0/24",
- "rangeStart": "192.168.123.2",
- "rangeEnd": "192.168.123.10",
- "routes": [
- { "dst": "0.0.0.0/0" }
- ],
- "gateway": "192.168.123.1"
- }
- }
- ]'
+ .. code-block:: yaml
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: sriov-conf
+ plugin: sriov
+ args: '[
+ {
+ "master": "eth1",
+ "pfOnly": true,
+ "ipam": {
+ "type": "host-local",
+ "subnet": "192.168.123.0/24",
+ "rangeStart": "192.168.123.2",
+ "rangeEnd": "192.168.123.10",
+ "routes": [
+ { "dst": "0.0.0.0/0" }
+ ],
+ "gateway": "192.168.123.1"
+ }
+ }
+ ]'
command:
@@ -194,8 +202,8 @@ command:
CNI Installation
================
.. _CNI: https://github.com/containernetworking/plugins
-Firstly, we should deploy all CNI plugins. The build process is following:
+First, we should deploy all the CNI plugins. The build process is as follows:
::
git clone https://github.com/containernetworking/plugins.git
@@ -219,6 +227,7 @@ we should put the Multus CNI binary to /opt/cni/bin/ where the Flannel CNI and S
CNIs are put.
.. _SRIOV: https://github.com/hustcat/sriov-cni
+
Its build process is as follows:
::
@@ -233,18 +242,20 @@ The following multus CNI configuration is located in /etc/cni/net.d/, here we na
as multus-cni.conf:
::
- {
- "name": "minion-cni-network",
- "type": "multus",
- "kubeconfig": "/etc/kubernetes/admin.conf",
- "delegates": [{
- "type": "flannel",
- "masterplugin": true,
- "delegate": {
- "isDefaultGateway": true
- }
- }]
- }
+ .. code-block:: json
+
+ {
+ "name": "minion-cni-network",
+ "type": "multus",
+ "kubeconfig": "/etc/kubernetes/admin.conf",
+ "delegates": [{
+ "type": "flannel",
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }]
+ }
command:
@@ -267,22 +278,24 @@ Configuring Pod with Control Plane and Data Plane
In this case the flannel-conf network object acts as the primary network.
::
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-sriov
- annotations:
- networks: '[
- { "name": "flannel-conf" },
- { "name": "sriov-conf" }
- ]'
- spec: # specification of the pod's contents
- containers:
- - name: pod-sriov
- image: "busybox"
- command: ["top"]
- stdin: true
- tty: true
+ .. code-block:: yaml
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: pod-sriov
+ annotations:
+ networks: '[
+ { "name": "flannel-conf" },
+ { "name": "sriov-conf" }
+ ]'
+ spec: # specification of the pod's contents
+ containers:
+ - name: pod-sriov
+ image: "busybox"
+ command: ["top"]
+ stdin: true
+ tty: true
2, Create Pod
@@ -301,25 +314,27 @@ Verifying Pod Network
=====================
::
- # kubectl exec pod-sriov -- ip a
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
- link/ether 0a:58:0a:e9:40:2a brd ff:ff:ff:ff:ff:ff
- inet 10.233.64.42/24 scope global eth0
- valid_lft forever preferred_lft forever
- inet6 fe80::8e6:32ff:fed3:7645/64 scope link
- valid_lft forever preferred_lft forever
- 4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
- link/ether 52:54:00:d4:d2:e5 brd ff:ff:ff:ff:ff:ff
- inet 192.168.123.2/24 scope global net0
- valid_lft forever preferred_lft forever
- inet6 fe80::5054:ff:fed4:d2e5/64 scope link
- valid_lft forever preferred_lft forever
+ .. code-block:: bash
+
+ # kubectl exec pod-sriov -- ip a
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+ 3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
+ link/ether 0a:58:0a:e9:40:2a brd ff:ff:ff:ff:ff:ff
+ inet 10.233.64.42/24 scope global eth0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::8e6:32ff:fed3:7645/64 scope link
+ valid_lft forever preferred_lft forever
+ 4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
+ link/ether 52:54:00:d4:d2:e5 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.123.2/24 scope global net0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::5054:ff:fed4:d2e5/64 scope link
+ valid_lft forever preferred_lft forever
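+
+As a further data-plane check, ping the gateway configured in sriov-network.yaml from the pod
+(assuming 192.168.123.1 answers ICMP):
+
+::
+
+    kubectl exec pod-sriov -- ping -c 3 192.168.123.1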
Contacts
========
diff --git a/docs/arm/multi_flannel_intfs_deployment.rst b/docs/arm/multi_flannel_intfs_deployment.rst
index 07c8ad7..65e643d 100644
--- a/docs/arm/multi_flannel_intfs_deployment.rst
+++ b/docs/arm/multi_flannel_intfs_deployment.rst
@@ -55,10 +55,10 @@ which uses Flannel as the networking backend. The related Flannel deployment fil
image to start the Flannel service.
.. image:: images/multi_flannel_intfs.PNG
- :alt: 2 Flannel interfaces deployment scenario
- :figclass: align-center
+ :width: 800px
+ :alt: 2 Flannel interfaces deployment scenario
- Fig 1. Multiple Flannel interfaces deployment architecture
+Fig 1. Multiple Flannel interfaces deployment architecture
.. _Etcd: https://coreos.com/etcd/
@@ -84,7 +84,7 @@ kube-flannel.yml. Here we give a revised version of this yaml file to start 2 Fl
.. include:: files/kube-2flannels.yml
:literal:
- kube-2flannels.yml
+kube-2flannels.yml
ConfigMap Added
@@ -94,14 +94,16 @@ To start the 2nd Flannel container process, we add a new ConfigMap named kube-fl
which includes a net-conf.json different from the 1st:
::
- net-conf.json: |
- {
- "Network": "10.3.0.0/16",
- "Backend": {
- "Type": "udp",
- "Port": 8286
+ .. code-block:: json
+
+ net-conf.json: |
+ {
+ "Network": "10.3.0.0/16",
+ "Backend": {
+ "Type": "udp",
+ "Port": 8286
+ }
}
- }
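+
+For comparison, the stock kube-flannel.yml ships a net-conf.json for the 1st Flanneld similar
+to the following (the exact network and backend type vary by release; treat these values as an
+assumption):
+
+::
+
+    net-conf.json: |
+      {
+        "Network": "10.244.0.0/16",
+        "Backend": {
+          "Type": "vxlan"
+        }
+      }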
2nd Flannel Container Added
@@ -112,20 +114,24 @@ The default Flanneld's UDP listen port is 8285, we set the 2nd Flanneld to liste
For the 2nd Flannel container, we use the following command:
::
- - name: kube-flannel2
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--subnet-file=/run/flannel/subnet2.env" ]
+ .. code-block:: yaml
+
+ - name: kube-flannel2
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--subnet-file=/run/flannel/subnet2.env" ]
which outputs the subnet file to /run/flannel/subnet2.env for the 2nd Flannel CNI to use.
And we mount the 2nd Flannel ConfigMap at /etc/kube-flannel/ for the 2nd Flanneld container process:
::
- volumeMounts:
- - name: run
- mountPath: /run
- - name: flannel-cfg2
- mountPath: /etc/kube-flannel/
+ .. code-block:: yaml
+
+ volumeMounts:
+ - name: run
+ mountPath: /run
+ - name: flannel-cfg2
+ mountPath: /etc/kube-flannel/
CNI Configuration
@@ -148,33 +154,35 @@ The following CNI configuration sample for 2 Flannel interfaces is located in /e
as 10-2flannels.conf:
::
- {
- "name": "flannel-networks",
- "type": "multus",
- "delegates": [
- {
- "type": "flannel",
- "name": "flannel.2",
- "subnetFile": "/run/flannel/subnet2.env",
- "dataDir": "/var/lib/cni/flannel/2",
- "delegate": {
- "bridge": "kbr1",
- "isDefaultGateway": false
- }
- },
- {
- "type": "flannel",
- "name": "flannel.1",
- "subnetFile": "/run/flannel/subnet.env",
- "dataDir": "/var/lib/cni/flannel",
- "masterplugin": true,
- "delegate": {
- "bridge": "kbr0",
- "isDefaultGateway": true
- }
- }
- ]
- }
+ .. code-block:: json
+
+ {
+ "name": "flannel-networks",
+ "type": "multus",
+ "delegates": [
+ {
+ "type": "flannel",
+ "name": "flannel.2",
+ "subnetFile": "/run/flannel/subnet2.env",
+ "dataDir": "/var/lib/cni/flannel/2",
+ "delegate": {
+ "bridge": "kbr1",
+ "isDefaultGateway": false
+ }
+ },
+ {
+ "type": "flannel",
+ "name": "flannel.1",
+ "subnetFile": "/run/flannel/subnet.env",
+ "dataDir": "/var/lib/cni/flannel",
+ "masterplugin": true,
+ "delegate": {
+ "bridge": "kbr0",
+ "isDefaultGateway": true
+ }
+ }
+ ]
+ }
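+
+Once both Flanneld processes are running, the 2nd subnet file can be inspected on the node
+(the values shown are illustrative; the actual subnet is allocated at runtime):
+
+::
+
+    # cat /run/flannel/subnet2.env
+    FLANNEL_NETWORK=10.3.0.0/16
+    FLANNEL_SUBNET=10.3.14.1/24
+    FLANNEL_MTU=1472
+    FLANNEL_IPMASQ=true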
For the 2nd Flannel CNI, it will use the subnet file /run/flannel/subnet2.env instead of the default /run/flannel/subnet.env,
which is generated by the 2nd Flanneld process, and the subnet data would be output to the directory:
@@ -228,32 +236,36 @@ kube-flannel.yml. For Flanneld to use the etcd backend, we could change the cont
backend:
::
- ...
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network" ]
- securityContext:
- privileged: true
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
+ .. code-block:: yaml
+
+ ...
+ containers:
+ - name: kube-flannel
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network" ]
+ securityContext:
+ privileged: true
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run
+ - name: flannel-cfg
+ mountPath: /etc/kube-flannel/
Here as we don't use the "--kube-subnet-mgr" option, the last 2 lines of
::
- - name: flannel-cfg
+ .. code-block:: yaml
+
+ - name: flannel-cfg
mountPath: /etc/kube-flannel/
can be ignored.
@@ -262,24 +274,26 @@ To start the 2nd Flanneld process, we can add the 2nd Flanneld container section
the 1st Flanneld container:
::
- containers:
- - name: kube-flannel2
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network2", "--subnet-file=/run/flannel/subnet2.env" ]
- securityContext:
- privileged: true
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run
+ .. code-block:: yaml
+
+ containers:
+ - name: kube-flannel2
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network2", "--subnet-file=/run/flannel/subnet2.env" ]
+ securityContext:
+ privileged: true
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run
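+
+With the etcd backend, each Flanneld reads its network configuration from <etcd-prefix>/config,
+so a config must be written for both prefixes beforehand. A sketch using the etcd v2 API
+(the network values are assumptions chosen to match the examples above):
+
+::
+
+    etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "udp", "Port": 8285}}'
+    etcdctl set /coreos.com/network2/config '{"Network": "10.3.0.0/16", "Backend": {"Type": "udp", "Port": 8286}}'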
The "--subnet-file" option makes the 2nd Flanneld output a separate subnet file, which provides the 2nd Flannel subnet configuration
to the Flannel CNI invoked by the Multus CNI.