author    Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>    2020-03-28 03:56:45 +0000
committer Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>    2020-09-18 12:04:25 -0700
commit  13ccea1a14dd12e585ef34680fcbcab5fd17550b (patch)
tree    98289feb7e9b2d62edd5c7138ca156436a8eda2d
parent  2efea1a12076d2ca11dedfd932fdfc0893e1d288 (diff)
adding the example folder

- adding vlan and provider network

Signed-off-by: Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
Change-Id: Ic97c61b38b51a7391a4d4b1c2e918c6373bff3b4
-rw-r--r--  example/README.md                              199
-rw-r--r--  example/images/direct-provider-networking.png  bin 0 -> 31119 bytes
-rw-r--r--  example/images/vlan-tagging.png                bin 0 -> 31521 bytes
-rw-r--r--  example/ovn4nfv-k8s-plugin-daemonset.yml       555
-rw-r--r--  example/ovn4nfv_direct_pn.yml                   79
-rw-r--r--  example/ovn4nfv_vlan_pn.yml                     82
6 files changed, 915 insertions, 0 deletions
diff --git a/example/README.md b/example/README.md
new file mode 100644
index 0000000..ff52bad
--- /dev/null
+++ b/example/README.md
@@ -0,0 +1,199 @@
+# Example Setup and Testing
+
+This `./example` folder contains the OVN4NFV-plugin daemonset YAML file, the VLAN and direct provider networking test scenarios, and
+the required sample configuration files.
+
+# Quick start
+
+## Creating sandbox environment
+
+Create two VMs in your setup. The recommended way to create the sandbox is through KUD; please follow the KUD all-in-one setup. This
+will create the two VMs and provide the required sandbox.
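+
+The provider network CRs used later select nodes through `nodeLabelList`. To confirm the `kubernetes.io/hostname` label value to use
+(a quick check, assuming kubectl is configured on the Kubernetes VM):
+```
+# show each node together with its kubernetes.io/hostname label
+kubectl get nodes -L kubernetes.io/hostname
+```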
+
+## VLAN Tagging Provider network testing
+
+The following setup has two VMs: one VM runs a Kubernetes cluster with the OVN4NFV k8s plugin, and the other VM acts as the provider
+network peer for testing.
+
+Apply the following YAML file to test the VLAN tagging provider networking. Change the `providerInterfaceName` and `nodeLabelList`
+in `ovn4nfv_vlan_pn.yml` to match your environment (see the excerpt after the command below).
+
+```
+kubectl apply -f ovn4nfv_vlan_pn.yml
+```
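+For reference, these are the fields to adjust in the ProviderNetwork CR inside `ovn4nfv_vlan_pn.yml` (excerpt from the file added in
+this change; the hostname label is specific to this test setup, and the inline comments are added here for clarity):
+```
+  providerNetType: VLAN
+  vlan:
+    vlanId: "100"
+    providerInterfaceName: eth0      # physical interface on the node
+    logicalInterfaceName: eth0.100   # VLAN interface created on top of it
+    vlanNodeSelector: specific
+    nodeLabelList:
+      - kubernetes.io/hostname=ubuntu18
+```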
+Applying this YAML creates the VLAN tagging interface eth0.100 in VM1 and two pods, one for each of the deployments
+`pnw-original-vlan-1` and `pnw-original-vlan-2`, in VM1. Check the interface details and the inter-network communication between the
+`net0` interfaces:
+```
+# kubectl exec -it pnw-original-vlan-1-6c67574cd7-mv57g -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:30
+ inet addr:10.244.64.48 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:11 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:462 (462.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3C
+ inet addr:172.16.33.3 Bcast:172.16.33.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:10 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:868 (868.0 B) TX bytes:826 (826.0 B)
+# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:31
+ inet addr:10.244.64.49 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:11 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:462 (462.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3D
+ inet addr:172.16.33.4 Bcast:172.16.33.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:25 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:2282 (2.2 KiB) TX bytes:2282 (2.2 KiB)
+```
+Test the ping operation between the VLAN provider network (`net0`) interfaces:
+```
+# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ping -I net0 172.16.33.3 -c 2
+PING 172.16.33.3 (172.16.33.3): 56 data bytes
+64 bytes from 172.16.33.3: seq=0 ttl=64 time=0.092 ms
+64 bytes from 172.16.33.3: seq=1 ttl=64 time=0.105 ms
+
+--- 172.16.33.3 ping statistics ---
+2 packets transmitted, 2 packets received, 0% packet loss
+round-trip min/avg/max = 0.092/0.098/0.105 ms
+```
+In VM2, create a VLAN tagging interface eth0.100 on top of eth0 and configure its IP address so that it looks like the following
+(a command sketch follows the output):
+```
+# ifconfig eth0.100
+eth0.100: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
+ inet 172.16.33.2 netmask 255.255.255.0 broadcast 172.16.33.255
+ ether 52:54:00:f4:ee:d9 txqueuelen 1000 (Ethernet)
+ RX packets 111 bytes 8092 (8.0 KB)
+ RX errors 0 dropped 0 overruns 0 frame 0
+ TX packets 149 bytes 12698 (12.6 KB)
+ TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+```
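+A minimal way to create this interface on VM2, assuming the standard iproute2 tools and eth0 as the uplink (adjust the interface
+name and address to your environment):
+```
+# create the VLAN sub-interface with VLAN ID 100 on top of eth0
+ip link add link eth0 name eth0.100 type vlan id 100
+# assign the provider network address used in this test
+ip addr add 172.16.33.2/24 dev eth0.100
+# bring the interface up
+ip link set dev eth0.100 up
+```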
+Pinging from VM2 through eth0.100 to pod 1 in VM1 should succeed, confirming the VLAN tagging:
+```
+# ping -I eth0.100 172.16.33.3 -c 2
+PING 172.16.33.3 (172.16.33.3) from 172.16.33.2 eth0.100: 56(84) bytes of data.
+64 bytes from 172.16.33.3: icmp_seq=1 ttl=64 time=0.382 ms
+64 bytes from 172.16.33.3: icmp_seq=2 ttl=64 time=0.347 ms
+
+--- 172.16.33.3 ping statistics ---
+2 packets transmitted, 2 received, 0% packet loss, time 1009ms
+rtt min/avg/max/mdev = 0.347/0.364/0.382/0.025 ms
+```
+## VLAN Tagging between VMs
+![vlan tagging testing](images/vlan-tagging.png)
+
+# Direct Provider network testing
+
+The main difference between VLAN tagging and direct provider networking is that, in the VLAN case, a VLAN logical interface is
+created first and the ports are then attached to it, whereas in the direct case the ports attach to the provider interface itself.
+To validate the direct provider networking connectivity, we create VLAN tagged interfaces between VM1 & VM2 and test the
+connectivity as follows.
+
+Create the VLAN tagging interface eth0.101 in VM1 and VM2 (it can be created the same way as eth0.100 above). Set
+`providerInterfaceName: eth0.101` in the direct provider network CR (see the excerpt after the command below).
+```
+# kubectl apply -f ovn4nfv_direct_pn.yml
+```
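+For reference, the `direct` section of the ProviderNetwork CR in `ovn4nfv_direct_pn.yml` looks like this (excerpt from the file
+added in this change; adjust `nodeLabelList` to your node, and note the inline comments are added here for clarity):
+```
+  providerNetType: DIRECT
+  direct:
+    providerInterfaceName: eth0.101   # the VLAN interface on the node is used directly
+    directNodeSelector: specific
+    nodeLabelList:
+      - kubernetes.io/hostname=ubuntu18
+```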
+Check the interconnection between the direct provider network pods as follows:
+```
+# kubectl exec -it pnw-original-direct-1-85f5b45fdd-qq6xc -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:33
+ inet addr:10.244.64.51 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:6 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:252 (252.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3E
+ inet addr:172.16.34.3 Bcast:172.16.34.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:29 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:2394 (2.3 KiB) TX bytes:2268 (2.2 KiB)
+
+# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:32
+ inet addr:10.244.64.50 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:6 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:252 (252.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3F
+ inet addr:172.16.34.4 Bcast:172.16.34.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:14 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:1092 (1.0 KiB) TX bytes:924 (924.0 B)
+# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ping -I net0 172.16.34.3 -c 2
+PING 172.16.34.3 (172.16.34.3): 56 data bytes
+64 bytes from 172.16.34.3: seq=0 ttl=64 time=0.097 ms
+64 bytes from 172.16.34.3: seq=1 ttl=64 time=0.096 ms
+
+--- 172.16.34.3 ping statistics ---
+2 packets transmitted, 2 packets received, 0% packet loss
+round-trip min/avg/max = 0.096/0.096/0.097 ms
+```
+In VM2, ping pod 1 in VM1:
+```
+$ ping -I eth0.101 172.16.34.2 -c 2
+PING 172.16.34.2 (172.16.34.2) from 172.16.34.2 eth0.101: 56(84) bytes of data.
+64 bytes from 172.16.34.2: icmp_seq=1 ttl=64 time=0.057 ms
+64 bytes from 172.16.34.2: icmp_seq=2 ttl=64 time=0.065 ms
+
+--- 172.16.34.2 ping statistics ---
+2 packets transmitted, 2 received, 0% packet loss, time 1010ms
+rtt min/avg/max/mdev = 0.057/0.061/0.065/0.004 ms
+```
+## Direct provider networking between VMs
+![Direct provider network testing](images/direct-provider-networking.png)
+
+# Summary
+
+These test scenarios are intended for development and verification purposes only. Work is in progress to automate the end-to-end
+testing.
diff --git a/example/images/direct-provider-networking.png b/example/images/direct-provider-networking.png
new file mode 100644
index 0000000..ffed71d
--- /dev/null
+++ b/example/images/direct-provider-networking.png
Binary files differ
diff --git a/example/images/vlan-tagging.png b/example/images/vlan-tagging.png
new file mode 100644
index 0000000..e6fe4dd
--- /dev/null
+++ b/example/images/vlan-tagging.png
Binary files differ
diff --git a/example/ovn4nfv-k8s-plugin-daemonset.yml b/example/ovn4nfv-k8s-plugin-daemonset.yml
new file mode 100644
index 0000000..307b950
--- /dev/null
+++ b/example/ovn4nfv-k8s-plugin-daemonset.yml
@@ -0,0 +1,555 @@
+
+---
+
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: networks.k8s.plugin.opnfv.org
+spec:
+ group: k8s.plugin.opnfv.org
+ names:
+ kind: Network
+ listKind: NetworkList
+ plural: networks
+ singular: network
+ scope: Namespaced
+ subresources:
+ status: {}
+ validation:
+ openAPIV3Schema:
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ properties:
+ cniType:
+ description: 'INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
+ Important: Run "operator-sdk generate k8s" to regenerate code after
+ modifying this file Add custom validation using kubebuilder tags:
+ https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html'
+ type: string
+ dns:
+ properties:
+ domain:
+ type: string
+ nameservers:
+ items:
+ type: string
+ type: array
+ options:
+ items:
+ type: string
+ type: array
+ search:
+ items:
+ type: string
+ type: array
+ type: object
+ ipv4Subnets:
+ items:
+ properties:
+ excludeIps:
+ type: string
+ gateway:
+ type: string
+ name:
+ type: string
+ subnet:
+ type: string
+ required:
+ - name
+ - subnet
+ type: object
+ type: array
+ ipv6Subnets:
+ items:
+ properties:
+ excludeIps:
+ type: string
+ gateway:
+ type: string
+ name:
+ type: string
+ subnet:
+ type: string
+ required:
+ - name
+ - subnet
+ type: object
+ type: array
+ routes:
+ items:
+ properties:
+ dst:
+ type: string
+ gw:
+ type: string
+ required:
+ - dst
+ type: object
+ type: array
+ required:
+ - cniType
+ - ipv4Subnets
+ type: object
+ status:
+ properties:
+ state:
+ description: 'INSERT ADDITIONAL STATUS FIELD - define observed state
+ of cluster Important: Run "operator-sdk generate k8s" to regenerate
+ code after modifying this file Add custom validation using kubebuilder
+ tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html'
+ type: string
+ required:
+ - state
+ type: object
+ version: v1alpha1
+ versions:
+ - name: v1alpha1
+ served: true
+ storage: true
+
+
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: providernetworks.k8s.plugin.opnfv.org
+spec:
+ group: k8s.plugin.opnfv.org
+ names:
+ kind: ProviderNetwork
+ listKind: ProviderNetworkList
+ plural: providernetworks
+ singular: providernetwork
+ scope: Namespaced
+ subresources:
+ status: {}
+ validation:
+ openAPIV3Schema:
+ description: ProviderNetwork is the Schema for the providernetworks API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: ProviderNetworkSpec defines the desired state of ProviderNetwork
+ properties:
+ cniType:
+ description: 'INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
+ Important: Run "operator-sdk generate k8s" to regenerate code after
+ modifying this file Add custom validation using kubebuilder tags:
+ https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html'
+ type: string
+ direct:
+ properties:
+ directNodeSelector:
+ type: string
+ nodeLabelList:
+ items:
+ type: string
+ type: array
+ providerInterfaceName:
+ type: string
+ required:
+ - directNodeSelector
+ - providerInterfaceName
+ type: object
+ dns:
+ properties:
+ domain:
+ type: string
+ nameservers:
+ items:
+ type: string
+ type: array
+ options:
+ items:
+ type: string
+ type: array
+ search:
+ items:
+ type: string
+ type: array
+ type: object
+ ipv4Subnets:
+ items:
+ properties:
+ excludeIps:
+ type: string
+ gateway:
+ type: string
+ name:
+ type: string
+ subnet:
+ type: string
+ required:
+ - name
+ - subnet
+ type: object
+ type: array
+ ipv6Subnets:
+ items:
+ properties:
+ excludeIps:
+ type: string
+ gateway:
+ type: string
+ name:
+ type: string
+ subnet:
+ type: string
+ required:
+ - name
+ - subnet
+ type: object
+ type: array
+ providerNetType:
+ type: string
+ routes:
+ items:
+ properties:
+ dst:
+ type: string
+ gw:
+ type: string
+ required:
+ - dst
+ type: object
+ type: array
+ vlan:
+ properties:
+ logicalInterfaceName:
+ type: string
+ nodeLabelList:
+ items:
+ type: string
+ type: array
+ providerInterfaceName:
+ type: string
+ vlanId:
+ type: string
+ vlanNodeSelector:
+ type: string
+ required:
+ - providerInterfaceName
+ - vlanId
+ - vlanNodeSelector
+ type: object
+ required:
+ - cniType
+ - ipv4Subnets
+ - providerNetType
+ type: object
+ status:
+ description: ProviderNetworkStatus defines the observed state of ProviderNetwork
+ properties:
+ state:
+ description: 'INSERT ADDITIONAL STATUS FIELD - define observed state
+ of cluster Important: Run "operator-sdk generate k8s" to regenerate
+ code after modifying this file Add custom validation using kubebuilder
+ tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html'
+ type: string
+ required:
+ - state
+ type: object
+ type: object
+ version: v1alpha1
+ versions:
+ - name: v1alpha1
+ served: true
+ storage: true
+---
+
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: k8s-nfn-sa
+ namespace: operator
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ creationTimestamp: null
+ name: k8s-nfn-cr
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - pods
+ - services
+ - endpoints
+ - persistentvolumeclaims
+ - events
+ - configmaps
+ - secrets
+ - nodes
+ verbs:
+ - '*'
+- apiGroups:
+ - apps
+ resources:
+ - deployments
+ - daemonsets
+ - replicasets
+ - statefulsets
+ verbs:
+ - '*'
+- apiGroups:
+ - monitoring.coreos.com
+ resources:
+ - servicemonitors
+ verbs:
+ - get
+ - create
+- apiGroups:
+ - apps
+ resourceNames:
+ - nfn-operator
+ resources:
+ - deployments/finalizers
+ verbs:
+ - update
+- apiGroups:
+ - k8s.plugin.opnfv.org
+ resources:
+ - '*'
+ - providernetworks
+ verbs:
+ - '*'
+
+---
+
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: k8s-nfn-crb
+subjects:
+- kind: Group
+ name: system:serviceaccounts
+ apiGroup: rbac.authorization.k8s.io
+roleRef:
+ kind: ClusterRole
+ name: k8s-nfn-cr
+ apiGroup: rbac.authorization.k8s.io
+
+
+---
+
+apiVersion: v1
+kind: Service
+metadata:
+ name: nfn-operator
+ namespace: operator
+spec:
+ type: NodePort
+ ports:
+ - port: 50000
+ protocol: TCP
+ targetPort: 50000
+ selector:
+ name: nfn-operator
+
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nfn-operator
+ namespace: operator
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ name: nfn-operator
+ template:
+ metadata:
+ labels:
+ name: nfn-operator
+ spec:
+ hostNetwork: true
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: nfnType
+ operator: In
+ values:
+ - operator
+ tolerations:
+ - key: "node-role.kubernetes.io/master"
+ effect: "NoSchedule"
+ operator: "Exists"
+ serviceAccountName: k8s-nfn-sa
+ containers:
+ - name: nfn-operator
+ image: integratedcloudnative/ovn4nfv-k8s-plugin:master
+ command: ["/usr/local/bin/entrypoint", "operator"]
+ imagePullPolicy: IfNotPresent
+ ports:
+ - containerPort: 50000
+ protocol: TCP
+ env:
+ - name: HOST_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.hostIP
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: OPERATOR_NAME
+ value: "nfn-operator"
+
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: ovn4nfv-cni-config
+ namespace: operator
+ labels:
+ app: ovn4nfv
+data:
+ ovn4nfv_k8s.conf: |
+ [logging]
+ loglevel=5
+ logfile=/var/log/openvswitch/ovn4k8s.log
+
+ [cni]
+ conf-dir=/etc/cni/net.d
+ plugin=ovn4nfvk8s-cni
+
+ [kubernetes]
+ kubeconfig=/etc/kubernetes/admin.conf
+
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+ name: ovn4nfv-cni
+ namespace: operator
+ labels:
+ app: ovn4nfv
+spec:
+ updateStrategy:
+ type: RollingUpdate
+ template:
+ metadata:
+ labels:
+ app: ovn4nfv
+ spec:
+ hostNetwork: true
+ nodeSelector:
+ beta.kubernetes.io/arch: amd64
+ tolerations:
+ - operator: Exists
+ effect: NoSchedule
+ containers:
+ - name: ovn4nfv
+ image: integratedcloudnative/ovn4nfv-k8s-plugin:master
+ command: ["/usr/local/bin/entrypoint", "cni"]
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "50Mi"
+ limits:
+ cpu: "100m"
+ memory: "50Mi"
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - name: cnibin
+ mountPath: /host/opt/cni/bin
+ - name: cniconf
+ mountPath: /host/etc/openvswitch
+ - name: ovn4nfv-cfg
+ mountPath: /tmp/ovn4nfv-conf
+ volumes:
+ - name: cnibin
+ hostPath:
+ path: /opt/cni/bin
+ - name: cniconf
+ hostPath:
+ path: /etc/openvswitch
+ - name: ovn4nfv-cfg
+ configMap:
+ name: ovn4nfv-cni-config
+ items:
+ - key: ovn4nfv_k8s.conf
+ path: ovn4nfv_k8s.conf
+
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+ name: nfn-agent
+ namespace: operator
+ labels:
+ app: nfn-agent
+spec:
+ updateStrategy:
+ type: RollingUpdate
+ template:
+ metadata:
+ labels:
+ app: nfn-agent
+ spec:
+ hostNetwork: true
+ nodeSelector:
+ beta.kubernetes.io/arch: amd64
+ tolerations:
+ - operator: Exists
+ effect: NoSchedule
+ containers:
+ - name: nfn-agent
+ image: integratedcloudnative/ovn4nfv-k8s-plugin:master
+ command: ["/usr/local/bin/entrypoint", "agent"]
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "50Mi"
+ limits:
+ cpu: "100m"
+ memory: "50Mi"
+ env:
+ - name: NFN_NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - mountPath: /run/openvswitch
+ name: host-run-ovs
+ - mountPath: /var/run/openvswitch
+ name: host-var-run-ovs
+ volumes:
+ - name: host-run-ovs
+ hostPath:
+ path: /run/openvswitch
+ - name: host-var-run-ovs
+ hostPath:
+ path: /var/run/openvswitch
diff --git a/example/ovn4nfv_direct_pn.yml b/example/ovn4nfv_direct_pn.yml
new file mode 100644
index 0000000..ab93c15
--- /dev/null
+++ b/example/ovn4nfv_direct_pn.yml
@@ -0,0 +1,79 @@
+apiVersion: k8s.plugin.opnfv.org/v1alpha1
+kind: ProviderNetwork
+metadata:
+ name: directpnetwork
+spec:
+ cniType: ovn4nfv
+ ipv4Subnets:
+ - subnet: 172.16.34.0/24
+ name: subnet2
+ gateway: 172.16.34.1/24
+ excludeIps: 172.16.34.2 172.16.34.5..172.16.34.10
+ providerNetType: DIRECT
+ direct:
+ providerInterfaceName: eth0.101
+ directNodeSelector: specific
+ nodeLabelList:
+ - kubernetes.io/hostname=ubuntu18
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: pnw-original-direct-1
+ labels:
+ app: pnw-original-direct-1
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: pnw-original-direct-1
+ template:
+ metadata:
+ labels:
+ app: pnw-original-direct-1
+ annotations:
+ k8s.v1.cni.cncf.io/networks: '[{ "name": "ovn-networkobj"}]'
+ k8s.plugin.opnfv.org/nfn-network: '{ "type": "ovn4nfv", "interface": [{ "name": "directpnetwork", "interface": "net0" }]}'
+
+ spec:
+ containers:
+ - name: pnw-original-direct-1
+ image: "busybox"
+ imagePullPolicy: Always
+ stdin: true
+ tty: true
+ securityContext:
+ privileged: true
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: pnw-original-direct-2
+ labels:
+ app: pnw-original-direct-2
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: pnw-original-direct-2
+ template:
+ metadata:
+ labels:
+ app: pnw-original-direct-2
+ annotations:
+ k8s.v1.cni.cncf.io/networks: '[{ "name": "ovn-networkobj"}]'
+ k8s.plugin.opnfv.org/nfn-network: '{ "type": "ovn4nfv", "interface": [{ "name": "directpnetwork", "interface": "net0" }]}'
+
+ spec:
+ containers:
+ - name: pnw-original-direct-2
+ image: "busybox"
+ imagePullPolicy: Always
+ stdin: true
+ tty: true
+ securityContext:
+ privileged: true
diff --git a/example/ovn4nfv_vlan_pn.yml b/example/ovn4nfv_vlan_pn.yml
new file mode 100644
index 0000000..081727d
--- /dev/null
+++ b/example/ovn4nfv_vlan_pn.yml
@@ -0,0 +1,82 @@
+apiVersion: k8s.plugin.opnfv.org/v1alpha1
+kind: ProviderNetwork
+metadata:
+ name: pnetwork
+spec:
+ cniType: ovn4nfv
+ ipv4Subnets:
+ - subnet: 172.16.33.0/24
+ name: subnet1
+ gateway: 172.16.33.1/24
+ excludeIps: 172.16.33.2 172.16.33.5..172.16.33.10
+ providerNetType: VLAN
+ vlan:
+ vlanId: "100"
+ providerInterfaceName: eth0
+ logicalInterfaceName: eth0.100
+ vlanNodeSelector: specific
+ nodeLabelList:
+ - kubernetes.io/hostname=ubuntu18
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: pnw-original-vlan-1
+ labels:
+ app: pnw-original-vlan-1
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: pnw-original-vlan-1
+ template:
+ metadata:
+ labels:
+ app: pnw-original-vlan-1
+ annotations:
+ k8s.v1.cni.cncf.io/networks: '[{ "name": "ovn-networkobj"}]'
+ k8s.plugin.opnfv.org/nfn-network: '{ "type": "ovn4nfv", "interface": [{ "name": "pnetwork", "interface": "net0" }]}'
+
+ spec:
+ containers:
+ - name: pnw-original-vlan-1
+ image: "busybox"
+ imagePullPolicy: Always
+ stdin: true
+ tty: true
+ securityContext:
+ privileged: true
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: pnw-original-vlan-2
+ labels:
+ app: pnw-original-vlan-2
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: pnw-original-vlan-2
+ template:
+ metadata:
+ labels:
+ app: pnw-original-vlan-2
+ annotations:
+ k8s.v1.cni.cncf.io/networks: '[{ "name": "ovn-networkobj"}]'
+ k8s.plugin.opnfv.org/nfn-network: '{ "type": "ovn4nfv", "interface": [{ "name": "pnetwork", "interface": "net0" }]}'
+
+ spec:
+ containers:
+ - name: pnw-original-vlan-2
+ image: "busybox"
+ imagePullPolicy: Always
+ stdin: true
+ tty: true
+ securityContext:
+ privileged: true
+