author     Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>  2020-09-18 06:22:16 -0700
committer  Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>  2020-09-18 12:04:41 -0700
commit     f61f4efd682848679d689cdbaf0bea31f7b3be2d (patch)
tree       6e9b41503375fc380d6b23d63f0c1435177fcd62
parent     abc86b0782d8d1ea41b5b5dd04b4c6c8755e7210 (diff)
update the documentations
Signed-off-by: Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
Change-Id: Iee2ed22a5002bffbd629188acf1ffcc60802c488
-rw-r--r--  README.md                                88
-rw-r--r--  README.rst                              326
-rw-r--r--  doc/development.md                        0
-rw-r--r--  doc/how-to-use.md                         0
-rw-r--r--  doc/quickstart.md                         0
-rw-r--r--  example/README.md                       398
-rw-r--r--  images/direct-provider-networking.png (renamed from example/images/direct-provider-networking.png)    bin 31119 -> 31119 bytes
-rw-r--r--  images/ovn4nfv-k8s-arch-block.png       bin 0 -> 102610 bytes
-rw-r--r--  images/ovn4nfv-network-traffic.png      bin 0 -> 88346 bytes
-rw-r--r--  images/sfc-with-sdewan.png              bin 0 -> 98211 bytes
-rw-r--r--  images/vlan-tagging.png (renamed from example/images/vlan-tagging.png)    bin 31521 -> 31521 bytes
11 files changed, 287 insertions, 525 deletions
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..148c884
--- /dev/null
+++ b/README.md
@@ -0,0 +1,88 @@
+# OVN4NFV K8s Plugin - Network controller
+This plugin addresses the following requirements for networking
+workloads as well as typical application workloads:
+- Multiple OVN network support
+- Multi-interface OVN support
+- Multi-IP address support
+- Dynamic creation of virtual networks
+- Route management across virtual networks and external networks
+- Service Function Chaining (SFC) support in Kubernetes
+- SR-IOV overlay networking (WIP)
+- OVN load balancer (WIP)
+
+## How it works
+
+OVN4NFV consists of four major components:
+- OVN control plane
+- OVN controller
+- Network Function Network (NFN) k8s operator/controller
+- Network Function Network (NFN) agent
+
+The OVN control plane and OVN controller take care of OVN configuration and installation on each node in the Kubernetes cluster. The NFN operator runs on the Kubernetes master, and the NFN agent runs as a daemonset on each node.
+
+### OVN4NFV architecture blocks
+![ovn4nfv k8s arc block](./images/ovn4nfv-k8s-arch-block.png)
+
+#### NFN Operator
+* Exposes the virtual network, provider network, and chaining CRDs to the external world
+* Programs OVN to create L2 switches (a sample Network CR is shown below)
+* Watches for pods coming up
+  * Assigns IP addresses for every network of the deployment
+  * Looks for replicas and auto-creates routes for chaining to work
+  * Creates load balancers for distributing load across CNF replicas
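+
+As a reference, a Network CR of the kind the operator reconciles looks roughly like the sketch below, carried over from the earlier README of this repo; the network name and subnet values are purely illustrative:
+
+```yaml
+apiVersion: k8s.plugin.opnfv.org/v1alpha1
+kind: Network
+metadata:
+  name: example-network               # illustrative name
+spec:
+  cniType: ovn4nfv
+  ipv4Subnets:
+  - subnet: 172.16.44.0/24            # the operator programs an OVN L2 switch for this subnet
+    name: subnet1
+    gateway: 172.16.44.1/24
+    excludeIps: 172.16.44.2 172.16.44.5..172.16.44.10
+```
+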
+#### NFN Agent
+* Performs CNI operations.
+* Configures VLANs and routes in the Linux kernel (for routes, it can operate in both the root and pod network namespaces)
+* Communicates with OVSDB to register provider interfaces (creates the OVS bridge and sets external-ids:ovn-bridge-mappings); a sample ProviderNetwork CR is shown below
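+
+For reference, a VLAN-based ProviderNetwork CR, carried over from the earlier README of this repo, looks roughly like this; the subnet, VLAN ID, and node label values are illustrative:
+
+```yaml
+apiVersion: k8s.plugin.opnfv.org/v1alpha1
+kind: ProviderNetwork
+metadata:
+  name: pnetwork                      # illustrative name
+spec:
+  cniType: ovn4nfv
+  ipv4Subnets:
+  - subnet: 172.16.33.0/24
+    name: subnet1
+    excludeIps: 172.16.33.2 172.16.33.5..172.16.33.10
+  providerNetType: VLAN
+  vlan:
+    vlanId: "100"
+    providerInterfaceName: eth1       # physical interface on the node
+    logicalInterfaceName: eth1.100
+    vlanNodeSelector: specific
+    nodeLabelList:
+    - kubernetes.io/hostname=testnode1
+```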
+
+### Network traffic between pods
+![ovn4nfv network traffic](./images/ovn4nfv-network-traffic.png)
+
+ovn4nfv-default-nw is the default logical switch created for the default Kubernetes pod network, with CIDR 10.244.64.0/18. Both the nodes and the pods in the Kubernetes cluster share the same IPAM information.
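+
+Pods that need interfaces on additional logical networks, beyond ovn4nfv-default-nw, request them through a pod annotation. The annotation format below is carried over from the earlier README of this repo; the network and interface names are illustrative:
+
+```yaml
+  annotations:
+    k8s.plugin.opnfv.org/nfn-network: '{ "type": "ovn4nfv", "interface": [
+      { "name": "example-network", "interfaceRequest": "net0" }
+    ]}'
+```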
+
+### Service Function Chaining Demo
+![sfc-with-sdewan](./images/sfc-with-sdewan.png)
+
+In a typical production environment, there are multiple network functions such as SLB, NGFW, and SDWAN CNFs.
+
+There are three general SFC flows:
+* Packets from a pod to the Internet: Ingress (L7 LB) -> SLB -> NGFW -> SDWAN CNF -> External router -> Internet
+* Packets from a pod to an internal server in the corporate network: Ingress (L7 LB) -> SLB -> M3 server
+* Packets from the internal server M3 to the Internet: M3 -> SLB -> NGFW -> SDWAN CNF -> External router -> Internet
+
+OVN4NFV SFC currently supports all three flows. A detailed demo is included in [demo/sfc-setup/README.md](./demo/sfc-setup/README.md).
+
+# Quickstart Installation Guide
+### kubeadm
+
+Install [Docker](https://docs.docker.com/engine/install/ubuntu/) on each Kubernetes cluster node.
+Follow the steps in [create cluster kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) to create the Kubernetes cluster.
+On the master node, run `kubeadm init` as below. ovn4nfv uses the pod network CIDR `10.244.64.0/18`:
+```
+ $ kubeadm init --kubernetes-version=1.19.0 --pod-network-cidr=10.244.64.0/18 --apiserver-advertise-address=<master_eth0_ip_address>
+```
+Deploy the ovn4nfv pod network to the cluster:
+```
+ $ kubectl apply -f deploy/ovn-daemonset.yaml
+ $ kubectl apply -f deploy/ovn4nfv-k8s-plugin.yaml
+```
+Join the worker nodes by running `kubeadm join` on each node as root, as described in [create cluster kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
+
+### kubespray
+
+Kubespray supports ovn4nfv as a network plugin; please follow the steps in [kubernetes-sigs/kubespray/docs/ovn4nfv.md](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ovn4nfv.md).
+
+## Comprehensive Documentation
+
+- [How to use](doc/how-to-use.md)
+- [Configuration](doc/configuration.md)
+- [Development](doc/development.md)
+
+## Contact Us
+
+For any questions about ovn4nfv k8s, feel free to ask in #general on the [ICN slack](https://akraino-icn-admin.herokuapp.com/), or open an issue at https://jira.opnfv.org/issues/.
+
+* Srinivasa Addepalli <srinivasa.r.addepalli@intel.com>
+* Ritu Sood <ritu.sood@intel.com>
+* Kuralamudhan Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
+
diff --git a/README.rst b/README.rst
deleted file mode 100644
index 6148dea..0000000
--- a/README.rst
+++ /dev/null
@@ -1,326 +0,0 @@
-.. Copyright 2018 Intel Corporation.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
-=================
-OVN4NFVK8s Plugin/nfn-operator (Network Function Networking operator)
-=================
-
-Problem statement
------------------
-
-Networking applications are of three types - Management applications,
-Control plane applications and data plane applications. Management
-and control plane applications are similar to Enterprise applications,
-but data plane applications different in following aspects:
-
-- Multiple virtual network interfaces
-- Multiple IP addresses
-- SRIOV networking support
-- Programmable virtual switch (for service function chaining, to tap
- the traffic for visibility etc..)
-
-Kubernetes (Simply K8S) is the most popular container orchestrator.
-K8S is supported by GCE, AZURE and AWS and will be supported by
-Akraino Edge stack that enable edge clouds.
-
-K8S has being enhanced to support VM workload types, this helps
-cloud providers that need to migrate legacy workloads to microservices
-architecture. Cloud providers may continue to support VM workload
-types for security reasons and hence there is need for VIM that
-support both VMs and containers. Since same K8S instance can
-orchestrate both VM and container workload types, same compute nodes
-can be leveraged for both VMs and containers. Telco and CSPs are
-seeing similar need to deploy networking applications as containers.
-
-Since, both VMs and container workloads are used for networking
-applications, there would be need for
-
-- Sharing the networks across VMs and containers.
-- Sharing the volumes across VMs and containers.
-
-**Network Function Virtualization Requirements**
-
-NFV workloads can be,
-
-- Management plane workloads
-- Control plane work loads
-- User plane (data plane workloads)
-- User plane workloads normally have
-- Multiple interfaces, Multiple subnets, Multiple virtual networks
-- NFV workloads typically have its own management network.
-- Some data plane workloads require SR-IOV NIC support for data
- interfaces and virtual NIC for other interfaces (for performance
- reasons)
-- Need for multiple CNIs.
-- NFV workloads require dynamic creation of virtual networks. Dynamic
- configuration of subnets.
-
-ovn4nfvk8s Plugin
------------------
-
-This plugin addresses the below requirements, for networking
-workloads as well typical application workloads
-- Multi-interface support
-- Multi-IP address support
-- Dynamic creation of virtual networks
-- Co-existing with SRIOV and other CNIs.
-- Route management across virtual networks and external networks
-
-**OVN Background**
-
-OVN, the Open Virtual Network, is a system to support virtual network
-abstraction. OVN complements the existing capabilities of OVS to add
-native support for virtual network abstractions, such as virtual L2
-and L3 overlays and security groups. Services such as DHCP are also
-desirable features. Just like OVS, OVN’s design goal is to have a
-production quality implementation that can operate at significant
-scale.
-
-**OVN4NFVK8s Plugin/ NFN-Operator development**
-
-ovn-kubernetes_ plugin is part of OVN project which provides OVN
-integration with Kubernetes but doesn't address the requirements
-as given above. To meet those requirements like multiple interfaces,
-IPs, dynamic creation of virtual networks, etc., OVN4NFVK8s plugin/
-nfn-operator is created. It assumes that it will be used in
-conjuction with Multus_or other similar CNI which allows for the
-co-existance of multipleCNI plugins in runtime. This plugin assumes
-that the first interface in a Pod is provided by some other Plugin/CNI
-like Flannel or even OVN-Kubernetes. It is only responsible to add
-multiple interfacesbased on the Pod annotations. The code is based on
-ovn-kubernetes_.
-
-NFN-Operator has following functionalities:
-1) It watches pods for annotations (see below)
-2) It is a CRD Controller for dynamic networks, provider networks and
- dynamic route creation.
-3) It is a gRPC server for nfn-agent running on all nodes in the cluster
-
-nfn-operator uses operator-sdk and controller runtime for watching for
-pods and also other CR's. For creating dynamic logical networks Network
-CRD controller creates OVN logical switch as defined in CR. For provider
-network creation Provider Network CRD controller works with nfn-agent
-running on nodes to create provider network. nfn-operator communicates
-with nfn agent using gRPC. Currently only VLAN based provider networks
-are supported.
-
-.. note::
-
- This plugin is currently tested to work with Multus and Flannel
- providing the first network interface.
-
-To meet the requirement of multiple interfaces and IP's per pod,
-a Pod annotation like below is required when working with Multus:
-
-
-.. code-block:: yaml
-
-
- annotations:
- k8s.v1.cni.cncf.io/networks: '[{ "name": "ovn-networkobj"}]'
- k8s.plugin.opnfv.org/nfn-network: '{ "type": "ovn4nfv", "interface": [
- { "name": <name of OVN Logical Switch>, "interfaceRequest": "eth1" },
- { "name": <name of OVN Logical Switch>, "interfaceRequest": "eth2" }
- ]}'
-
-Based on these annotations watcher service in OVN4NFVK8s plugin/
-nfn-operator assumes logical switch is already present. Dynamic IP
-addresses are assigned (static IP's also supported) and annotations
-are updated.
-
-When the Pod is initialized on a node, OVN4NFVK8s CNI creates multiple
-interfaces and assigns IP addresses for the pod based on the annotations.
-
-**Multus Configuration**
-Multus CRD definition for OVN:
-
-.. code-block:: yaml
-
- apiVersion: "k8s.cni.cncf.io/v1"
- kind: NetworkAttachmentDefinition
- metadata:
- name: ovn-networkobj
- spec:
- config: '{
- "cniVersion": "0.3.1",
- "name": "ovn4nfv-k8s-plugin",
- "type": "ovn4nfvk8s-cni"
- }'
-
-Please refer to Multus_ for details about how this configuration is used
-
-CNI configuration file for Multus with Flannel:
-
-.. code-block:: yaml
-
- {
- "type": "multus",
- "name": "multus-cni",
- "cniVersion": "0.3.1",
- "kubeconfig": "/etc/kubernetes/admin.conf",
- "delegates": [
- {
- "type": "flannel",
- "cniVersion": "0.3.1",
- "masterplugin": true,
- "delegate": {
- "isDefaultGateway": false
- }
- }
- ]
- }
-
-Refer Kubernetes_ documentation for the order in which CNI configurations
-are applied.
-
-
-**Build**
-
-For building the project:
-
-.. code-block:: bash
-
- cd ovn4nfv-k8s-plugin
- make
-
-
-This will output two files nfn-operator, nfn-agent and ovn4nfvk8s-cni which are the plugin/
- operator, gRPC client and CNI binaries respectively.
-
-ovn4nfvk8s-cni requires some configuration at start up.
-
-Example configuration file (default location/etc/openvswitch/ovn4nfv_k8s.conf)
-
-.. code-block:: yaml
-
- [logging]
- loglevel=5
- logfile=/var/log/openvswitch/ovn4k8s.log
-
- [cni]
- conf-dir=/etc/cni/net.d
- plugin=ovn4nfvk8s-cni
-
- [kubernetes]
- kubeconfig=/etc/kubernetes/admin.conf
-
-
-
-**CRD Controllers**
-
-
-nfn-operator includes controllers for 3 types of CRDs:
-
-1) Network CRD - To create logical networks.
-
-2) Provider Network CRD - To Create Provider networks. This works along with nfn-agent
- to create provider networks on nodes in cluster as needed.
-
-3) Chaining operator - To provision routes in Pods as per CR definition.
-
-
-
-**Network CR Example**
-
-
-.. code-block:: yaml
-
- apiVersion: k8s.plugin.opnfv.org/v1alpha1
- kind: Network
- metadata:
- name: example-network
- spec:
- # Add fields here
- cniType: ovn4nfv
- ipv4Subnets:
- - subnet: 172.16.44.0/24
- name: subnet1
- gateway: 172.16.44.1/24
- excludeIps: 172.16.44.2 172.16.44.5..172.16.44.10
-
-
-
-**Provider Network CR Example**
-
-
-.. code-block:: yaml
-
- apiVersion: k8s.plugin.opnfv.org/v1alpha1
- kind: ProviderNetwork
- metadata:
- name: pnetwork
- spec:
- cniType: ovn4nfv
- ipv4Subnets:
- - subnet: 172.16.33.0/24
- name: subnet1
- excludeIps: 172.16.33.2 172.16.33.5..172.16.33.10
- providerNetType: VLAN
- vlan:
- vlanId: "100"
- providerInterfaceName: eth1
- logicalInterfaceName: eth1.100
- vlanNodeSelector: specific
- nodeLabelList:
- - kubernetes.io/hostname=testnode1
-
-**Chaining CR Example**
-
-TODO
-
-
-**Figure**
-
-
-.. code-block:: raw
-
- +-----------------+
- | |
- | | Program OVN Switch
- |ovn4nfvk8s Plugin| +------------------+
- | +--------------------->| |
- | | | OVN Switch |
- | | | |
- | | +------------------+
- +----+----------+-+
- ^ |
- | |
- |On Event |Annotate Pod
- | |
- | v
- +----+--------------+ +------------------+ +-----------+
- | | | | | Pod |
- | Kube API +--------> Kube Scheduler +---------->| |
- | | | | +--------+--+
- | | +--------+---------+ |
- +-------------------+ | |
- | |
- | |Assign IP & MAC
- +--------v-----------+ |
- | | |
- | ovn4nfvk8s-cni | |
- | +------------------+
- +--------------------+
-
-
-
-
-**References**
-
-.. _ovn-kubernetes: https://github.com/openvswitch/ovn-kubernetes
-.. _Multus: https://github.com/intel/multus-cni
-.. _Kubernetes: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
-
-**Authors/Contributors**
-
-Addepalli, Srinivasa R <srinivasa.r.addepalli@intel.com>
-Sood, Ritu <ritu.sood@intel.com>
-
diff --git a/doc/development.md b/doc/development.md
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/doc/development.md
diff --git a/doc/how-to-use.md b/doc/how-to-use.md
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/doc/how-to-use.md
diff --git a/doc/quickstart.md b/doc/quickstart.md
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/doc/quickstart.md
diff --git a/example/README.md b/example/README.md
index ff52bad..2d3ad4c 100644
--- a/example/README.md
+++ b/example/README.md
@@ -1,199 +1,199 @@
-# Example Setup and Testing
-
-In this `./example` folder, OVN4NFV-plugin daemonset yaml file, VLAN and direct Provider networking testing scenarios and required sample
-configuration file.
-
-# Quick start
-
-## Creating sandbox environment
-
-Create 2 VMs in your setup. The recommended way of creating the sandbox is through KUD. Please follow the all-in-one setup in KUD. This
-will create two VMs and provide the required sandbox.
-
-## VLAN Tagging Provider network testing
-
-The following setup have 2 VMs with one VM having Kubernetes setup with OVN4NFVk8s plugin and another VM act as provider networking to do
-testing.
-
-Run the following yaml file to test teh vlan tagging provider networking. User required to change the `providerInterfaceName` and
-`nodeLabelList` in the `ovn4nfv_vlan_pn.yml`
-
-```
-kubectl apply -f ovn4nfv_vlan_pn.yml
-```
-This create Vlan tagging interface eth0.100 in VM1 and two pods for the deployment `pnw-original-vlan-1` and `pnw-original-vlan-2` in VM.
-Test the interface details and inter network communication between `net0` interfaces
-```
-# kubectl exec -it pnw-original-vlan-1-6c67574cd7-mv57g -- ifconfig
-eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:30
- inet addr:10.244.64.48 Bcast:0.0.0.0 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
- RX packets:11 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:462 (462.0 B) TX bytes:0 (0.0 B)
-
-lo Link encap:Local Loopback
- inet addr:127.0.0.1 Mask:255.0.0.0
- UP LOOPBACK RUNNING MTU:65536 Metric:1
- RX packets:0 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
-
-net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3C
- inet addr:172.16.33.3 Bcast:172.16.33.255 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
- RX packets:10 errors:0 dropped:0 overruns:0 frame:0
- TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:868 (868.0 B) TX bytes:826 (826.0 B)
-# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ifconfig
-eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:31
- inet addr:10.244.64.49 Bcast:0.0.0.0 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
- RX packets:11 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:462 (462.0 B) TX bytes:0 (0.0 B)
-
-lo Link encap:Local Loopback
- inet addr:127.0.0.1 Mask:255.0.0.0
- UP LOOPBACK RUNNING MTU:65536 Metric:1
- RX packets:0 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
-
-net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3D
- inet addr:172.16.33.4 Bcast:172.16.33.255 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
- RX packets:25 errors:0 dropped:0 overruns:0 frame:0
- TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:2282 (2.2 KiB) TX bytes:2282 (2.2 KiB)
-```
-Test the ping operation between the vlan interfaces
-```
-# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ping -I net0 172.16.33.3 -c 2
-PING 172.16.33.3 (172.16.33.3): 56 data bytes
-64 bytes from 172.16.33.3: seq=0 ttl=64 time=0.092 ms
-64 bytes from 172.16.33.3: seq=1 ttl=64 time=0.105 ms
-
---- 172.16.33.3 ping statistics ---
-2 packets transmitted, 2 packets received, 0% packet loss
-round-trip min/avg/max = 0.092/0.098/0.105 ms
-```
-In VM2 create a Vlan tagging for eth0 as eth0.100 and configure the IP address as
-```
-# ifconfig eth0.100
-eth0.100: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
- inet 172.16.33.2 netmask 255.255.255.0 broadcast 172.16.33.255
- ether 52:54:00:f4:ee:d9 txqueuelen 1000 (Ethernet)
- RX packets 111 bytes 8092 (8.0 KB)
- RX errors 0 dropped 0 overruns 0 frame 0
- TX packets 149 bytes 12698 (12.6 KB)
- TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-```
-Pinging from VM2 through eth0.100 to pod 1 in VM1 should be successfull to test the VLAN tagging
-```
-# ping -I eth0.100 172.16.33.3 -c 2
-PING 172.16.33.3 (172.16.33.3) from 172.16.33.2 eth0.100: 56(84) bytes of data.
-64 bytes from 172.16.33.3: icmp_seq=1 ttl=64 time=0.382 ms
-64 bytes from 172.16.33.3: icmp_seq=2 ttl=64 time=0.347 ms
-
---- 172.16.33.3 ping statistics ---
-2 packets transmitted, 2 received, 0% packet loss, time 1009ms
-rtt min/avg/max/mdev = 0.347/0.364/0.382/0.025 ms
-```
-## VLAN Tagging between VMs
-![vlan tagging testing](images/vlan-tagging.png)
-
-# Direct Provider network testing
-
-The main difference between Vlan tagging and Direct provider networking is that VLAN logical interface is created and then ports are
-attached to it. In order to validate the direct provider networking connectivity, we create VLAN tagging between VM1 & VM2 and test the
-connectivity as follow.
-
-Create VLAN tagging interface eth0.101 in VM1 and VM2. Just add `providerInterfaceName: eth0.101' in Direct provider network CR.
-```
-# kubectl apply -f ovn4nfv_direct_pn.yml
-```
-Check the inter connection between direct provider network pods as follow
-```
-# kubectl exec -it pnw-original-direct-1-85f5b45fdd-qq6xc -- ifconfig
-eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:33
- inet addr:10.244.64.51 Bcast:0.0.0.0 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
- RX packets:6 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:252 (252.0 B) TX bytes:0 (0.0 B)
-
-lo Link encap:Local Loopback
- inet addr:127.0.0.1 Mask:255.0.0.0
- UP LOOPBACK RUNNING MTU:65536 Metric:1
- RX packets:0 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
-
-net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3E
- inet addr:172.16.34.3 Bcast:172.16.34.255 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
- RX packets:29 errors:0 dropped:0 overruns:0 frame:0
- TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:2394 (2.3 KiB) TX bytes:2268 (2.2 KiB)
-
-# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ifconfig
-eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:32
- inet addr:10.244.64.50 Bcast:0.0.0.0 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
- RX packets:6 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:252 (252.0 B) TX bytes:0 (0.0 B)
-
-lo Link encap:Local Loopback
- inet addr:127.0.0.1 Mask:255.0.0.0
- UP LOOPBACK RUNNING MTU:65536 Metric:1
- RX packets:0 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
-
-net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3F
- inet addr:172.16.34.4 Bcast:172.16.34.255 Mask:255.255.255.0
- UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
- RX packets:14 errors:0 dropped:0 overruns:0 frame:0
- TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:1092 (1.0 KiB) TX bytes:924 (924.0 B)
-# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ping -I net0 172.16.34.3 -c 2
-PING 172.16.34.3 (172.16.34.3): 56 data bytes
-64 bytes from 172.16.34.3: seq=0 ttl=64 time=0.097 ms
-64 bytes from 172.16.34.3: seq=1 ttl=64 time=0.096 ms
-
---- 172.16.34.3 ping statistics ---
-2 packets transmitted, 2 packets received, 0% packet loss
-round-trip min/avg/max = 0.096/0.096/0.097 ms
-```
-In VM2, ping the pod1 in the VM1
-$ ping -I eth0.101 172.16.34.2 -c 2
-```
-PING 172.16.34.2 (172.16.34.2) from 172.16.34.2 eth0.101: 56(84) bytes of data.
-64 bytes from 172.16.34.2: icmp_seq=1 ttl=64 time=0.057 ms
-64 bytes from 172.16.34.2: icmp_seq=2 ttl=64 time=0.065 ms
-
---- 172.16.34.2 ping statistics ---
-2 packets transmitted, 2 received, 0% packet loss, time 1010ms
-rtt min/avg/max/mdev = 0.057/0.061/0.065/0.004 ms
-```
-## Direct provider networking between VMs
-![Direct provider network testing](images/direct-provider-networking.png)
-
-# Summary
-
-This is only the test scenario for development and also for verification purpose. Work in progress to make the end2end testing
-automatic.
+# Example Setup and Testing
+
+This `./example` folder contains the OVN4NFV plugin daemonset YAML file, the VLAN and direct provider networking test scenarios, and the required sample
+configuration files.
+
+# Quick start
+
+## Creating sandbox environment
+
+Create two VMs in your setup. The recommended way of creating the sandbox is through KUD; please follow the KUD all-in-one setup, which
+creates the two VMs and provides the required sandbox.
+
+## VLAN Tagging Provider network testing
+
+The following setup has two VMs: one VM runs Kubernetes with the OVN4NFV k8s plugin, and the other VM acts as the provider network peer used for
+testing.
+
+Apply the following YAML file to test VLAN-tagged provider networking. You are required to change the `providerInterfaceName` and
+`nodeLabelList` values in `ovn4nfv_vlan_pn.yml`; a sketch of the relevant fields follows.
+
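+The full contents of `ovn4nfv_vlan_pn.yml` are not reproduced here; the provider-network portion of it looks roughly like the sketch below, with field values inferred from this walkthrough and therefore illustrative:
+
+```yaml
+apiVersion: k8s.plugin.opnfv.org/v1alpha1
+kind: ProviderNetwork
+metadata:
+  name: pnw-original-vlan             # illustrative name
+spec:
+  cniType: ovn4nfv
+  ipv4Subnets:
+  - subnet: 172.16.33.0/24
+    name: subnet1
+    excludeIps: 172.16.33.2
+  providerNetType: VLAN
+  vlan:
+    vlanId: "100"
+    providerInterfaceName: eth0       # change to the physical NIC on your node
+    logicalInterfaceName: eth0.100
+    vlanNodeSelector: specific
+    nodeLabelList:
+    - kubernetes.io/hostname=vm1      # change to match your node's label
+```
+
+Then apply it: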
+```
+kubectl apply -f ovn4nfv_vlan_pn.yml
+```
+This creates the VLAN-tagged interface eth0.100 in VM1 and two pods for the deployments `pnw-original-vlan-1` and `pnw-original-vlan-2` in the VM.
+Check the interface details and the inter-network communication between the `net0` interfaces:
+```
+# kubectl exec -it pnw-original-vlan-1-6c67574cd7-mv57g -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:30
+ inet addr:10.244.64.48 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:11 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:462 (462.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3C
+ inet addr:172.16.33.3 Bcast:172.16.33.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:10 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:868 (868.0 B) TX bytes:826 (826.0 B)
+# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:31
+ inet addr:10.244.64.49 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:11 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:462 (462.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3D
+ inet addr:172.16.33.4 Bcast:172.16.33.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:25 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:2282 (2.2 KiB) TX bytes:2282 (2.2 KiB)
+```
+Test the ping operation between the VLAN interfaces:
+```
+# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ping -I net0 172.16.33.3 -c 2
+PING 172.16.33.3 (172.16.33.3): 56 data bytes
+64 bytes from 172.16.33.3: seq=0 ttl=64 time=0.092 ms
+64 bytes from 172.16.33.3: seq=1 ttl=64 time=0.105 ms
+
+--- 172.16.33.3 ping statistics ---
+2 packets transmitted, 2 packets received, 0% packet loss
+round-trip min/avg/max = 0.092/0.098/0.105 ms
+```
+In VM2, create a VLAN-tagged interface eth0.100 on eth0 and configure its IP address as follows:
+```
+# ifconfig eth0.100
+eth0.100: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
+ inet 172.16.33.2 netmask 255.255.255.0 broadcast 172.16.33.255
+ ether 52:54:00:f4:ee:d9 txqueuelen 1000 (Ethernet)
+ RX packets 111 bytes 8092 (8.0 KB)
+ RX errors 0 dropped 0 overruns 0 frame 0
+ TX packets 149 bytes 12698 (12.6 KB)
+ TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+```
+Pinging from VM2 through eth0.100 to pod 1 in VM1 should succeed, verifying the VLAN tagging:
+```
+# ping -I eth0.100 172.16.33.3 -c 2
+PING 172.16.33.3 (172.16.33.3) from 172.16.33.2 eth0.100: 56(84) bytes of data.
+64 bytes from 172.16.33.3: icmp_seq=1 ttl=64 time=0.382 ms
+64 bytes from 172.16.33.3: icmp_seq=2 ttl=64 time=0.347 ms
+
+--- 172.16.33.3 ping statistics ---
+2 packets transmitted, 2 received, 0% packet loss, time 1009ms
+rtt min/avg/max/mdev = 0.347/0.364/0.382/0.025 ms
+```
+## VLAN Tagging between VMs
+![vlan tagging testing](../images/vlan-tagging.png)
+
+# Direct Provider network testing
+
+The main difference between VLAN-tagged and direct provider networking is that in the VLAN case a logical VLAN interface is created first and ports are then
+attached to it. In order to validate direct provider networking connectivity, we create VLAN-tagged interfaces between VM1 and VM2 and test the
+connectivity as follows.
+
+Create the VLAN-tagged interface eth0.101 in VM1 and VM2, and add `providerInterfaceName: eth0.101` in the direct provider network CR.
+```
+# kubectl apply -f ovn4nfv_direct_pn.yml
+```
+Check the interconnection between the direct provider network pods as follows:
+```
+# kubectl exec -it pnw-original-direct-1-85f5b45fdd-qq6xc -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:33
+ inet addr:10.244.64.51 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:6 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:252 (252.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3E
+ inet addr:172.16.34.3 Bcast:172.16.34.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:29 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:2394 (2.3 KiB) TX bytes:2268 (2.2 KiB)
+
+# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ifconfig
+eth0 Link encap:Ethernet HWaddr 0A:58:0A:F4:40:32
+ inet addr:10.244.64.50 Bcast:0.0.0.0 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
+ RX packets:6 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:252 (252.0 B) TX bytes:0 (0.0 B)
+
+lo Link encap:Local Loopback
+ inet addr:127.0.0.1 Mask:255.0.0.0
+ UP LOOPBACK RUNNING MTU:65536 Metric:1
+ RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
+
+net0 Link encap:Ethernet HWaddr 0A:00:00:00:00:3F
+ inet addr:172.16.34.4 Bcast:172.16.34.255 Mask:255.255.255.0
+ UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
+ RX packets:14 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:0
+ RX bytes:1092 (1.0 KiB) TX bytes:924 (924.0 B)
+# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ping -I net0 172.16.34.3 -c 2
+PING 172.16.34.3 (172.16.34.3): 56 data bytes
+64 bytes from 172.16.34.3: seq=0 ttl=64 time=0.097 ms
+64 bytes from 172.16.34.3: seq=1 ttl=64 time=0.096 ms
+
+--- 172.16.34.3 ping statistics ---
+2 packets transmitted, 2 packets received, 0% packet loss
+round-trip min/avg/max = 0.096/0.096/0.097 ms
+```
+In VM2, ping pod 1 in VM1:
+```
+$ ping -I eth0.101 172.16.34.2 -c 2
+PING 172.16.34.2 (172.16.34.2) from 172.16.34.2 eth0.101: 56(84) bytes of data.
+64 bytes from 172.16.34.2: icmp_seq=1 ttl=64 time=0.057 ms
+64 bytes from 172.16.34.2: icmp_seq=2 ttl=64 time=0.065 ms
+
+--- 172.16.34.2 ping statistics ---
+2 packets transmitted, 2 received, 0% packet loss, time 1010ms
+rtt min/avg/max/mdev = 0.057/0.061/0.065/0.004 ms
+```
+## Direct provider networking between VMs
+![Direct provider network testing](../images/direct-provider-networking.png)
+
+# Summary
+
+These are test scenarios intended only for development and verification purposes. Work is in progress to automate the end-to-end
+testing.
diff --git a/example/images/direct-provider-networking.png b/images/direct-provider-networking.png
index ffed71d..ffed71d 100644
--- a/example/images/direct-provider-networking.png
+++ b/images/direct-provider-networking.png
Binary files differ
diff --git a/images/ovn4nfv-k8s-arch-block.png b/images/ovn4nfv-k8s-arch-block.png
new file mode 100644
index 0000000..7e2960b
--- /dev/null
+++ b/images/ovn4nfv-k8s-arch-block.png
Binary files differ
diff --git a/images/ovn4nfv-network-traffic.png b/images/ovn4nfv-network-traffic.png
new file mode 100644
index 0000000..992b194
--- /dev/null
+++ b/images/ovn4nfv-network-traffic.png
Binary files differ
diff --git a/images/sfc-with-sdewan.png b/images/sfc-with-sdewan.png
new file mode 100644
index 0000000..48da66f
--- /dev/null
+++ b/images/sfc-with-sdewan.png
Binary files differ
diff --git a/example/images/vlan-tagging.png b/images/vlan-tagging.png
index e6fe4dd..e6fe4dd 100644
--- a/example/images/vlan-tagging.png
+++ b/images/vlan-tagging.png
Binary files differ