Diffstat (limited to 'docs/arm')
-rw-r--r--  docs/arm/data_plane_dpdk_deployment.rst      56
-rw-r--r--  docs/arm/data_plane_sriov_pf_deployment.rst  267
-rw-r--r--  docs/arm/multi_flannel_intfs_deployment.rst  186
3 files changed, 270 insertions, 239 deletions
diff --git a/docs/arm/data_plane_dpdk_deployment.rst b/docs/arm/data_plane_dpdk_deployment.rst
index 1f6317c..72e4011 100644
--- a/docs/arm/data_plane_dpdk_deployment.rst
+++ b/docs/arm/data_plane_dpdk_deployment.rst
@@ -114,33 +114,35 @@ Configuring Pod with Control Plane and Data Plane with DPDK Acceleration
1, Save the following YAML to dpdk.yaml.
::
- apiVersion: v1
- kind: Pod
- metadata:
- name: dpdk
- spec:
- nodeSelector:
- beta.kubernetes.io/arch: arm64
- containers:
- - name: dpdk
- image: younglook/dpdk:arm64
- command: [ "bash", "-c", "/usr/bin/l2fwd --huge-unlink -l 6-7 -n 4 --file-prefix=container -- -p 3" ]
- stdin: true
- tty: true
- securityContext:
- privileged: true
- volumeMounts:
- - mountPath: /dev/vfio
- name: vfio
- - mountPath: /mnt/huge
- name: huge
- volumes:
- - name: vfio
- hostPath:
- path: /dev/vfio
- - name: huge
- hostPath:
- path: /mnt/huge
+ .. code-block:: yaml
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: dpdk
+ spec:
+ nodeSelector:
+ beta.kubernetes.io/arch: arm64
+ containers:
+ - name: dpdk
+ image: younglook/dpdk:arm64
+ command: [ "bash", "-c", "/usr/bin/l2fwd --huge-unlink -l 6-7 -n 4 --file-prefix=container -- -p 3" ]
+ stdin: true
+ tty: true
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - mountPath: /dev/vfio
+ name: vfio
+ - mountPath: /mnt/huge
+ name: huge
+ volumes:
+ - name: vfio
+ hostPath:
+ path: /dev/vfio
+ - name: huge
+ hostPath:
+ path: /mnt/huge
2, Create Pod
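As an aside, the l2fwd arguments in the manifest above can be decoded with a short script. This is purely illustrative; the helper functions are our own, not part of DPDK:

```python
# Decode the DPDK l2fwd arguments used in the pod manifest above:
#   -l 6-7  -> run on logical cores 6 and 7
#   -n 4    -> use 4 memory channels
#   -p 3    -> port bitmask 0b11, i.e. DPDK ports 0 and 1

def parse_corelist(spec):
    """Expand an EAL -l core list such as '6-7' or '0,2-3' into a sorted list."""
    cores = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cores.update(range(lo, hi + 1))
        else:
            cores.add(int(part))
    return sorted(cores)

def parse_portmask(mask):
    """Expand an l2fwd -p port bitmask (e.g. '3' or '0x3') into port IDs."""
    value = int(mask, 0)
    return [bit for bit in range(value.bit_length()) if value & (1 << bit)]

print(parse_corelist("6-7"))   # [6, 7]
print(parse_portmask("3"))     # [0, 1]
```

So the pod forwards traffic between DPDK ports 0 and 1, pinned to cores 6 and 7.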
diff --git a/docs/arm/data_plane_sriov_pf_deployment.rst b/docs/arm/data_plane_sriov_pf_deployment.rst
index 7cbd4d7..946d81f 100644
--- a/docs/arm/data_plane_sriov_pf_deployment.rst
+++ b/docs/arm/data_plane_sriov_pf_deployment.rst
@@ -16,14 +16,14 @@ This document gives a brief introduction on how to deploy SRIOV CNI with PF mode
Introduction
============
-.. _sriov_cni: https://github.com/hustcat/sriov-cni
-.. _Flannel: https://github.com/coreos/flannel
-.. _Multus: https://github.com/Intel-Corp/multus-cni
-.. _cni: https://github.com/containernetworking/cni
-.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
-.. _k8s-crd: https://kubernetes.io/docs/concepts/api-extension/custom-resources/
-.. _arm64: https://github.com/kubernetes/website/pull/6511
-.. _files: https://github.com/kubernetes/website/pull/6511/files
+.. _sriov_cni: https://github.com/hustcat/sriov-cni
+.. _Flannel: https://github.com/coreos/flannel
+.. _Multus: https://github.com/Intel-Corp/multus-cni
+.. _cni-description: https://github.com/containernetworking/cni
+.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
+.. _k8s-crd: https://kubernetes.io/docs/concepts/api-extension/custom-resources/
+.. _arm64: https://github.com/kubernetes/website/pull/6511
+.. _files: https://github.com/kubernetes/website/pull/6511/files
As we know, in some cases we need to deploy multiple network interfaces
@@ -79,20 +79,22 @@ Please make sure that rbac was added for Kubernetes cluster.
Here we name it as rbac.yaml:
::
- apiVersion: rbac.authorization.k8s.io/v1beta1
- kind: ClusterRoleBinding
- metadata:
- name: fabric8-rbac
- subjects:
- - kind: ServiceAccount
- # Reference to upper's `metadata.name`
- name: default
- # Reference to upper's `metadata.namespace`
- namespace: default
- roleRef:
- kind: ClusterRole
- name: cluster-admin
- apiGroup: rbac.authorization.k8s.io
+ .. code-block:: yaml
+
+ apiVersion: rbac.authorization.k8s.io/v1beta1
+ kind: ClusterRoleBinding
+ metadata:
+ name: fabric8-rbac
+ subjects:
+ - kind: ServiceAccount
+ # Reference to upper's `metadata.name`
+ name: default
+ # Reference to upper's `metadata.namespace`
+ namespace: default
+ roleRef:
+ kind: ClusterRole
+ name: cluster-admin
+ apiGroup: rbac.authorization.k8s.io
command:
@@ -105,28 +107,30 @@ Please make sure that CRD was added for Kubernetes cluster.
Here we name it as crdnetwork.yaml:
::
- apiVersion: apiextensions.k8s.io/v1beta1
- kind: CustomResourceDefinition
- metadata:
- # name must match the spec fields below, and be in the form: <plural>.<group>
- name: networks.kubernetes.com
- spec:
- # group name to use for REST API: /apis/<group>/<version>
- group: kubernetes.com
- # version name to use for REST API: /apis/<group>/<version>
- version: v1
- # either Namespaced or Cluster
- scope: Namespaced
- names:
- # plural name to be used in the URL: /apis/<group>/<version>/<plural>
- plural: networks
- # singular name to be used as an alias on the CLI and for display
- singular: network
- # kind is normally the CamelCased singular type. Your resource manifests use this.
- kind: Network
- # shortNames allow shorter string to match your resource on the CLI
- shortNames:
- - net
+ .. code-block:: yaml
+
+ apiVersion: apiextensions.k8s.io/v1beta1
+ kind: CustomResourceDefinition
+ metadata:
+ # name must match the spec fields below, and be in the form: <plural>.<group>
+ name: networks.kubernetes.com
+ spec:
+ # group name to use for REST API: /apis/<group>/<version>
+ group: kubernetes.com
+ # version name to use for REST API: /apis/<group>/<version>
+ version: v1
+ # either Namespaced or Cluster
+ scope: Namespaced
+ names:
+ # plural name to be used in the URL: /apis/<group>/<version>/<plural>
+ plural: networks
+ # singular name to be used as an alias on the CLI and for display
+ singular: network
+ # kind is normally the CamelCased singular type. Your resource manifests use this.
+ kind: Network
+ # shortNames allow shorter string to match your resource on the CLI
+ shortNames:
+ - net
command:
@@ -139,19 +143,21 @@ Create flannel network as control plane.
Here we name it as flannel-network.yaml:
::
- apiVersion: "kubernetes.com/v1"
- kind: Network
- metadata:
- name: flannel-conf
- plugin: flannel
- args: '[
- {
- "masterplugin": true,
- "delegate": {
- "isDefaultGateway": true
- }
- }
- ]'
+ .. code-block:: yaml
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: flannel-conf
+ plugin: flannel
+ args: '[
+ {
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }
+ ]'
command:
@@ -164,27 +170,29 @@ Create sriov network with PF mode as data plane.
Here we name it as sriov-network.yaml:
::
- apiVersion: "kubernetes.com/v1"
- kind: Network
- metadata:
- name: sriov-conf
- plugin: sriov
- args: '[
- {
- "master": "eth1",
- "pfOnly": true,
- "ipam": {
- "type": "host-local",
- "subnet": "192.168.123.0/24",
- "rangeStart": "192.168.123.2",
- "rangeEnd": "192.168.123.10",
- "routes": [
- { "dst": "0.0.0.0/0" }
- ],
- "gateway": "192.168.123.1"
- }
- }
- ]'
+ .. code-block:: yaml
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: sriov-conf
+ plugin: sriov
+ args: '[
+ {
+ "master": "eth1",
+ "pfOnly": true,
+ "ipam": {
+ "type": "host-local",
+ "subnet": "192.168.123.0/24",
+ "rangeStart": "192.168.123.2",
+ "rangeEnd": "192.168.123.10",
+ "routes": [
+ { "dst": "0.0.0.0/0" }
+ ],
+ "gateway": "192.168.123.1"
+ }
+ }
+ ]'
command:
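As a side note, the host-local IPAM block above can be sanity-checked with Python's ipaddress module; a minimal sketch of ours, not part of the CNI plugin:

```python
import ipaddress

# The host-local IPAM settings from sriov-network.yaml above.
subnet = ipaddress.ip_network("192.168.123.0/24")
range_start = ipaddress.ip_address("192.168.123.2")
range_end = ipaddress.ip_address("192.168.123.10")
gateway = ipaddress.ip_address("192.168.123.1")

# Both ends of the allocation range and the gateway must fall inside the subnet.
assert range_start in subnet and range_end in subnet and gateway in subnet

# Addresses host-local can hand out to pods (inclusive range).
pool = [ipaddress.ip_address(a) for a in range(int(range_start), int(range_end) + 1)]
print(len(pool))  # 9 assignable addresses
```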
@@ -194,8 +202,8 @@ command:
CNI Installation
================
.. _CNI: https://github.com/containernetworking/plugins
-Firstly, we should deploy all CNI plugins. The build process is following:
+Firstly, we should deploy all CNI plugins. The build process is as follows:
::
git clone https://github.com/containernetworking/plugins.git
@@ -219,6 +227,7 @@ we should put the Multus CNI binary to /opt/cni/bin/ where the Flannel CNI and S
CNIs are put.
.. _SRIOV: https://github.com/hustcat/sriov-cni
+
Its build process is as follows:
::
@@ -233,18 +242,20 @@ The following multus CNI configuration is located in /etc/cni/net.d/, here we na
as multus-cni.conf:
::
- {
- "name": "minion-cni-network",
- "type": "multus",
- "kubeconfig": "/etc/kubernetes/admin.conf",
- "delegates": [{
- "type": "flannel",
- "masterplugin": true,
- "delegate": {
- "isDefaultGateway": true
- }
- }]
- }
+ .. code-block:: json
+
+ {
+ "name": "minion-cni-network",
+ "type": "multus",
+ "kubeconfig": "/etc/kubernetes/admin.conf",
+ "delegates": [{
+ "type": "flannel",
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }]
+ }
command:
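A quick way to sanity-check this configuration is to confirm that exactly one delegate is marked as the master plugin, since that delegate owns the pod's default route. A small illustrative sketch:

```python
import json

# The Multus configuration from multus-cni.conf above.
multus_conf = json.loads("""
{
  "name": "minion-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/admin.conf",
  "delegates": [{
    "type": "flannel",
    "masterplugin": true,
    "delegate": {
      "isDefaultGateway": true
    }
  }]
}
""")

# Exactly one delegate should be the master plugin: it provides the pod's
# primary interface and default gateway ("isDefaultGateway": true).
masters = [d for d in multus_conf["delegates"] if d.get("masterplugin")]
assert len(masters) == 1
print(masters[0]["type"])  # flannel
```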
@@ -267,22 +278,24 @@ Configuring Pod with Control Plane and Data Plane
In this case the flannel-conf network object acts as the primary network.
::
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-sriov
- annotations:
- networks: '[
- { "name": "flannel-conf" },
- { "name": "sriov-conf" }
- ]'
- spec: # specification of the pod's contents
- containers:
- - name: pod-sriov
- image: "busybox"
- command: ["top"]
- stdin: true
- tty: true
+ .. code-block:: yaml
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: pod-sriov
+ annotations:
+ networks: '[
+ { "name": "flannel-conf" },
+ { "name": "sriov-conf" }
+ ]'
+ spec: # specification of the pod's contents
+ containers:
+ - name: pod-sriov
+ image: "busybox"
+ command: ["top"]
+ stdin: true
+ tty: true
2, Create Pod
@@ -301,25 +314,27 @@ Verifying Pod Network
=====================
::
- # kubectl exec pod-sriov -- ip a
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
- link/ether 0a:58:0a:e9:40:2a brd ff:ff:ff:ff:ff:ff
- inet 10.233.64.42/24 scope global eth0
- valid_lft forever preferred_lft forever
- inet6 fe80::8e6:32ff:fed3:7645/64 scope link
- valid_lft forever preferred_lft forever
- 4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
- link/ether 52:54:00:d4:d2:e5 brd ff:ff:ff:ff:ff:ff
- inet 192.168.123.2/24 scope global net0
- valid_lft forever preferred_lft forever
- inet6 fe80::5054:ff:fed4:d2e5/64 scope link
- valid_lft forever preferred_lft forever
+ .. code-block:: bash
+
+ # kubectl exec pod-sriov -- ip a
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+ 3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
+ link/ether 0a:58:0a:e9:40:2a brd ff:ff:ff:ff:ff:ff
+ inet 10.233.64.42/24 scope global eth0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::8e6:32ff:fed3:7645/64 scope link
+ valid_lft forever preferred_lft forever
+ 4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
+ link/ether 52:54:00:d4:d2:e5 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.123.2/24 scope global net0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::5054:ff:fed4:d2e5/64 scope link
+ valid_lft forever preferred_lft forever
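The interface-to-address mapping in the output above can be extracted mechanically; a small parsing sketch of our own, useful when scripting this verification:

```python
import re

# Abbreviated `kubectl exec pod-sriov -- ip a` output from above.
ip_a_output = """\
3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    inet 10.233.64.42/24 scope global eth0
4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.123.2/24 scope global net0
"""

# Map each global IPv4 address to its interface name.
# eth0 is the Flannel (control plane) interface, net0 the SRIOV (data plane) one.
addrs = dict(re.findall(r"inet (\d+\.\d+\.\d+\.\d+)/\d+ scope global (\S+)", ip_a_output))
print(addrs)  # {'10.233.64.42': 'eth0', '192.168.123.2': 'net0'}
```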
Contacts
========
diff --git a/docs/arm/multi_flannel_intfs_deployment.rst b/docs/arm/multi_flannel_intfs_deployment.rst
index 07c8ad7..65e643d 100644
--- a/docs/arm/multi_flannel_intfs_deployment.rst
+++ b/docs/arm/multi_flannel_intfs_deployment.rst
@@ -55,10 +55,10 @@ which uses Flannel as the networking backend. The related Flannel deployment fil
image to start the Flannel service.
.. image:: images/multi_flannel_intfs.PNG
- :alt: 2 Flannel interfaces deployment scenario
- :figclass: align-center
+ :width: 800px
+ :alt: 2 Flannel interfaces deployment scenario
- Fig 1. Multiple Flannel interfaces deployment architecture
+Fig 1. Multiple Flannel interfaces deployment architecture
.. _Etcd: https://coreos.com/etcd/
@@ -84,7 +84,7 @@ kube-flannel.yml. Here we give a revised version of this yaml file to start 2 Fl
.. include:: files/kube-2flannels.yml
:literal:
- kube-2flannels.yml
+kube-2flannels.yml
ConfigMap Added
@@ -94,14 +94,16 @@ To start the 2nd Flannel container process, we add a new ConfigMap named kube-fl
includes a new net-conf.json from the 1st:
::
- net-conf.json: |
- {
- "Network": "10.3.0.0/16",
- "Backend": {
- "Type": "udp",
- "Port": 8286
+ .. code-block:: json
+
+ net-conf.json: |
+ {
+ "Network": "10.3.0.0/16",
+ "Backend": {
+ "Type": "udp",
+ "Port": 8286
+ }
}
- }
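With two Flannel instances, their pod networks and UDP ports must not collide. A quick check, assuming the 1st Flannel keeps the stock 10.244.0.0/16 network and the default port 8285 from kube-flannel.yml (an assumption; adjust to your cluster):

```python
import ipaddress
import json

# net-conf.json of the 2nd Flannel (from the ConfigMap above).
second = json.loads('{"Network": "10.3.0.0/16", "Backend": {"Type": "udp", "Port": 8286}}')

# Assumption: the 1st Flannel uses the stock kube-flannel.yml defaults.
first_network = ipaddress.ip_network("10.244.0.0/16")
first_port = 8285  # default Flanneld UDP listen port

second_network = ipaddress.ip_network(second["Network"])

# The two Flannel instances must not collide on pod CIDR or UDP port.
assert not first_network.overlaps(second_network)
assert second["Backend"]["Port"] != first_port
print(second_network, second["Backend"]["Port"])
```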
2nd Flannel Container Added
@@ -112,20 +114,24 @@ The default Flanneld's UDP listen port is 8285, we set the 2nd Flanneld to liste
For the 2nd Flannel container, we use the command as:
::
- - name: kube-flannel2
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--subnet-file=/run/flannel/subnet2.env" ]
+ .. code-block:: yaml
+
+ - name: kube-flannel2
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--subnet-file=/run/flannel/subnet2.env" ]
which outputs the subnet file to /run/flannel/subnet2.env for the 2nd Flannel CNI to use.
We also mount the 2nd Flannel ConfigMap to /etc/kube-flannel/ for the 2nd Flanneld container process:
::
- volumeMounts:
- - name: run
- mountPath: /run
- - name: flannel-cfg2
- mountPath: /etc/kube-flannel/
+ .. code-block:: yaml
+
+ volumeMounts:
+ - name: run
+ mountPath: /run
+ - name: flannel-cfg2
+ mountPath: /etc/kube-flannel/
CNI Configuration
@@ -148,33 +154,35 @@ The following CNI configuration sample for 2 Flannel interfaces is located in /e
as 10-2flannels.conf:
::
- {
- "name": "flannel-networks",
- "type": "multus",
- "delegates": [
- {
- "type": "flannel",
- "name": "flannel.2",
- "subnetFile": "/run/flannel/subnet2.env",
- "dataDir": "/var/lib/cni/flannel/2",
- "delegate": {
- "bridge": "kbr1",
- "isDefaultGateway": false
- }
- },
- {
- "type": "flannel",
- "name": "flannel.1",
- "subnetFile": "/run/flannel/subnet.env",
- "dataDir": "/var/lib/cni/flannel",
- "masterplugin": true,
- "delegate": {
- "bridge": "kbr0",
- "isDefaultGateway": true
- }
- }
- ]
- }
+ .. code-block:: json
+
+ {
+ "name": "flannel-networks",
+ "type": "multus",
+ "delegates": [
+ {
+ "type": "flannel",
+ "name": "flannel.2",
+ "subnetFile": "/run/flannel/subnet2.env",
+ "dataDir": "/var/lib/cni/flannel/2",
+ "delegate": {
+ "bridge": "kbr1",
+ "isDefaultGateway": false
+ }
+ },
+ {
+ "type": "flannel",
+ "name": "flannel.1",
+ "subnetFile": "/run/flannel/subnet.env",
+ "dataDir": "/var/lib/cni/flannel",
+ "masterplugin": true,
+ "delegate": {
+ "bridge": "kbr0",
+ "isDefaultGateway": true
+ }
+ }
+ ]
+ }
The 2nd Flannel CNI will use the subnet file /run/flannel/subnet2.env, generated by the 2nd Flanneld process,
instead of the default /run/flannel/subnet.env, and its subnet data is output to the directory:
@@ -228,32 +236,36 @@ kube-flannel.yml. For Flanneld to use the etcd backend, we could change the cont
backend:
::
- ...
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network" ]
- securityContext:
- privileged: true
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
+ .. code-block:: yaml
+
+ ...
+ containers:
+ - name: kube-flannel
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network" ]
+ securityContext:
+ privileged: true
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run
+ - name: flannel-cfg
+ mountPath: /etc/kube-flannel/
Since we don't use the "--kube-subnet-mgr" option here, the last 2 lines of
::
- - name: flannel-cfg
+ .. code-block:: yaml
+
+ - name: flannel-cfg
mountPath: /etc/kube-flannel/
can be ignored.
@@ -262,24 +274,26 @@ To start the 2nd Flanneld process, we can add the 2nd Flanneld container section
the 1st Flanneld container:
::
- containers:
- - name: kube-flannel2
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network2", "--subnet-file=/run/flannel/subnet2.env" ]
- securityContext:
- privileged: true
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run
+ .. code-block:: yaml
+
+ containers:
+ - name: kube-flannel2
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network2", "--subnet-file=/run/flannel/subnet2.env" ]
+ securityContext:
+ privileged: true
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run
The "--subnet-file" option for the 2nd Flanneld outputs a subnet file for the 2nd Flannel subnet configuration
used by the Flannel CNI, which is invoked by Multus CNI.
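For reference, the subnet file that Flanneld writes is a plain KEY=VALUE env file; a minimal parsing sketch, with sample values that are illustrative only (matching the 10.3.0.0/16 network configured above):

```python
# Sample contents of /run/flannel/subnet2.env as written by the 2nd Flanneld
# (the specific subnet and MTU values here are illustrative, not from a real run).
sample = """\
FLANNEL_NETWORK=10.3.0.0/16
FLANNEL_SUBNET=10.3.52.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
"""

def parse_subnet_env(text):
    """Parse a Flanneld subnet env file into a dict of KEY -> value strings."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

env = parse_subnet_env(sample)
print(env["FLANNEL_SUBNET"])  # 10.3.52.1/24
```

The Flannel CNI reads this file to learn which per-node subnet to allocate pod addresses from.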