Diffstat (limited to 'docs')
 -rw-r--r-- docs/development/index.rst                    |  22
 -rw-r--r-- docs/development/openstack-networking-ovn.rst | 400
 -rw-r--r-- docs/development/ovn-for-nfv.rst              |  23
 -rw-r--r-- docs/release/configuration-guide.rst          | 187
 -rw-r--r-- docs/release/index.rst                        |  21
 -rw-r--r-- docs/release/release-notes.rst                |  14
 -rw-r--r-- docs/testing/index.rst                        |  20
 -rw-r--r-- docs/testing/testing-notes.rst                | 620
 8 files changed, 1307 insertions, 0 deletions
diff --git a/docs/development/index.rst b/docs/development/index.rst
new file mode 100644
index 0000000..5cf12b5
--- /dev/null
+++ b/docs/development/index.rst
@@ -0,0 +1,22 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License. http://creativecommons.org/licenses/by/4.0
+
+====================
+OVN4NFV Development
+====================
+
+:Project: OVN4NFV, https://wiki.opnfv.org/display/PROJ/Ovn4nfv
+
+:Editors: Vikram Dham (Huawei)
+:Authors: Vikram Dham (Huawei)
+ Trinath Somanchi (NXP)
+ Prakash Ramchandran (Huawei)
+
+:Abstract: This document provides an overview of networking-ovn and its use
+           in NFV, and outlines the official OpenStack release.
+
+.. toctree::
+ :maxdepth: 2
+
+ openstack-networking-ovn.rst
+ ovn-for-nfv.rst
diff --git a/docs/development/openstack-networking-ovn.rst b/docs/development/openstack-networking-ovn.rst
new file mode 100644
index 0000000..9fa3766
--- /dev/null
+++ b/docs/development/openstack-networking-ovn.rst
@@ -0,0 +1,400 @@
+===============
+OVN information
+===============
+
+The original OVN project announcement can be found here:
+
+* http://networkheresy.com/2015/01/13/ovn-bringing-native-virtual-networking-to-ovs/
+
+The OVN architecture is described here:
+
+* http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
+
+Here are two tutorials that help with learning different aspects of OVN:
+
+* http://blog.spinhirne.com/p/blog-series.html#introToOVN
+* http://docs.openvswitch.org/en/latest/tutorials/ovn-sandbox/
+
+There is also an in-depth tutorial on using OVN with OpenStack:
+
+* http://docs.openvswitch.org/en/latest/tutorials/ovn-openstack/
+
+OVN DB schemas and other man pages:
+
+* http://openvswitch.org/support/dist-docs/ovn-nb.5.html
+* http://openvswitch.org/support/dist-docs/ovn-sb.5.html
+* http://openvswitch.org/support/dist-docs/ovn-nbctl.8.html
+* http://openvswitch.org/support/dist-docs/ovn-sbctl.8.html
+* http://openvswitch.org/support/dist-docs/ovn-northd.8.html
+* http://openvswitch.org/support/dist-docs/ovn-controller.8.html
+* http://openvswitch.org/support/dist-docs/ovn-controller-vtep.8.html
+
+or find a full list of OVS and OVN man pages here:
+
+* http://docs.openvswitch.org/en/latest/ref/
+
+The openvswitch web page includes a list of presentations, some of which are
+about OVN:
+
+* http://openvswitch.org/support/
+
+Here are some direct links to past OVN presentations:
+
+* `OVN talk at OpenStack Summit in Boston, Spring 2017
+ <https://www.youtube.com/watch?v=sgc7myiX6ts>`_
+* `OVN talk at OpenStack Summit in Barcelona, Fall 2016
+ <https://www.youtube.com/watch?v=q3cJ6ezPnCU>`_
+* `OVN talk at OpenStack Summit in Austin, Spring 2016
+ <https://www.youtube.com/watch?v=okralc7LrZo>`_
+* OVN Project Update at the OpenStack Summit in Tokyo, Fall 2015 -
+ `Slides <http://openvswitch.org/support/slides/OVN_Tokyo.pdf>`__ -
+ `Video <https://www.youtube.com/watch?v=3IrG2xghJjs>`__
+* OVN at OpenStack Summit in Vancouver, Spring 2015 -
+ `Slides <http://openvswitch.org/support/slides/OVN-Vancouver.pdf>`__ -
+ `Video <https://www.youtube.com/watch?v=kEzXTq2fPDg>`__
+* `OVS Conference 2015 <https://www.youtube.com/watch?v=JLGZOYi_Cqc>`_
+
+These blog resources may also help with testing and understanding OVN:
+
+* http://networkop.co.uk/blog/2016/11/27/ovn-part1/
+* http://networkop.co.uk/blog/2016/12/10/ovn-part2/
+* https://blog.russellbryant.net/2016/12/19/comparing-openstack-neutron-ml2ovs-and-ovn-control-plane/
+* https://blog.russellbryant.net/2016/11/11/ovn-logical-flows-and-ovn-trace/
+* https://blog.russellbryant.net/2016/09/29/ovs-2-6-and-the-first-release-of-ovn/
+* http://galsagie.github.io/2015/11/23/ovn-l3-deepdive/
+* http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/
+* http://galsagie.github.io/sdn/openstack/ovs/2015/05/30/ovn-deep-dive/
+* http://blog.russellbryant.net/2015/05/14/an-ez-bake-ovn-for-openstack/
+* http://galsagie.github.io/sdn/openstack/ovs/2015/04/26/ovn-containers/
+* http://blog.russellbryant.net/2015/04/21/ovn-and-openstack-status-2015-04-21/
+* http://blog.russellbryant.net/2015/04/08/ovn-and-openstack-integration-development-update/
+
+
+
+Install & Configuration
+=======================
+
+The ``networking-ovn`` repository includes integration with DevStack that
+enables creation of a simple Open Virtual Network (OVN) development and test
+environment. This document discusses what is required for manual installation,
+or for integration into a production OpenStack deployment tool, assuming a
+conventional architecture that includes the following types of nodes:
+
+* Controller - Runs OpenStack control plane services such as REST APIs
+ and databases.
+
+* Network - Runs the layer-2, layer-3 (routing), DHCP, and metadata agents
+  for the Networking service. Some agents are optional. Usually provides
+ connectivity between provider (public) and project (private) networks
+ via NAT and floating IP addresses.
+
+ .. note::
+
+ Some tools deploy these services on controller nodes.
+
+* Compute - Runs the hypervisor and layer-2 agent for the Networking
+ service.
+
+Packaging
+---------
+
+Open vSwitch (OVS) includes OVN beginning with version 2.5 and considers
+it experimental. The Networking service integration for OVN uses an
+independent package, typically ``networking-ovn``.
+
+Building OVS from source automatically installs OVN. For deployment tools
+using distribution packages, the ``openvswitch-ovn`` package for RHEL/CentOS
+and compatible distributions automatically installs ``openvswitch`` as a
+dependency. Ubuntu/Debian includes ``ovn-central``, ``ovn-host``,
+``ovn-docker``, and ``ovn-common`` packages that pull in the appropriate Open
+vSwitch dependencies as needed.
+
+A ``python-networking-ovn`` RPM may be obtained for Fedora or CentOS from
+the RDO project. A package based on the ``master`` branch of
+``networking-ovn`` can be found at https://trunk.rdoproject.org/.
+
+Fedora and CentOS RPM builds of OVS and OVN from the ``master`` branch of
+``ovs`` can be found in this COPR repository:
+https://copr.fedorainfracloud.org/coprs/leifmadsen/ovs-master/.
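+
+As an illustration (treat the exact package names as assumptions, since they
+vary by distribution and release), installing from distribution packages
+might look like this on RHEL/CentOS with RDO:
+
+.. code-block:: console
+
+   # yum install openvswitch-ovn python-networking-ovn
+
+or on Ubuntu/Debian:
+
+.. code-block:: console
+
+   # apt-get install ovn-central ovn-host ovn-common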
+
+Controller nodes
+----------------
+
+Each controller node runs the OVS service (including dependent services such
+as ``ovsdb-server``) and the ``ovn-northd`` service. However, only a single
+instance of the ``ovsdb-server`` and ``ovn-northd`` services can operate in
+a deployment. Deployment tools can implement active/passive
+high-availability using a management tool that monitors service health
+and automatically starts these services on another node after failure of the
+primary node. See the :ref:`faq` for more information.
+
+#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
+
+#. Start the OVS service. The central OVS service starts the ``ovsdb-server``
+ service that manages OVN databases.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start openvswitch
+
+ Using the ``ovs-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
+
+#. Configure the ``ovsdb-server`` component. By default, the ``ovsdb-server``
+ service only permits local access to databases via Unix socket. However,
+ OVN services on compute nodes require access to these databases.
+
+ * Permit remote database access.
+
+ .. code-block:: console
+
+ # ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:IP_ADDRESS
+
+ Replace ``IP_ADDRESS`` with the IP address of the management network
+ interface on the controller node.
+
+ .. note::
+
+ Permit remote access to TCP port 6640 on any host firewall.
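+
+   To confirm that the listener was added, you can query the running
+   ``ovsdb-server`` service (this assumes the default control socket):
+
+   .. code-block:: console
+
+      # ovs-appctl -t ovsdb-server ovsdb-server/list-remotes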
+
+#. Start the ``ovn-northd`` service.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start ovn-northd
+
+ Using the ``ovn-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovn-ctl start_northd
+
+ Options for *start_northd*:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovn-ctl start_northd --help
+ # ...
+ # DB_NB_SOCK="/usr/local/etc/openvswitch/nb_db.sock"
+ # DB_NB_PID="/usr/local/etc/openvswitch/ovnnb_db.pid"
+ # DB_SB_SOCK="usr/local/etc/openvswitch/sb_db.sock"
+ # DB_SB_PID="/usr/local/etc/openvswitch/ovnsb_db.pid"
+ # ...
+
+#. Configure the Networking server component. The Networking service
+ implements OVN as an ML2 driver. Edit the ``/etc/neutron/neutron.conf``
+ file:
+
+ * Enable the ML2 core plug-in.
+
+ .. code-block:: ini
+
+ [DEFAULT]
+ ...
+ core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
+
+ * Enable the OVN layer-3 service.
+
+ .. code-block:: ini
+
+ [DEFAULT]
+ ...
+ service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin
+
+#. Configure the ML2 plug-in. Edit the
+ ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file:
+
+ * Configure the OVN mechanism driver, network type drivers, self-service
+ (tenant) network types, and enable the port security extension.
+
+ .. code-block:: ini
+
+ [ml2]
+ ...
+ mechanism_drivers = ovn
+ type_drivers = local,flat,vlan,geneve
+ tenant_network_types = geneve
+ extension_drivers = port_security
+ overlay_ip_version = 4
+
+ .. note::
+
+ To enable VLAN self-service networks, add ``vlan`` to the
+ ``tenant_network_types`` option. The first network type
+ in the list becomes the default self-service network type.
+
+ To use IPv6 for all overlay (tunnel) network endpoints,
+ set the ``overlay_ip_version`` option to ``6``.
+
+ * Configure the Geneve ID range and maximum header size. The IP version
+ overhead (20 bytes for IPv4 (default) or 40 bytes for IPv6) is added
+ to the maximum header size based on the ML2 ``overlay_ip_version``
+ option.
+
+ .. code-block:: ini
+
+ [ml2_type_geneve]
+ ...
+ vni_ranges = 1:65536
+ max_header_size = 38
+
+ .. note::
+
+ The Networking service uses the ``vni_ranges`` option to allocate
+ network segments. However, OVN ignores the actual values. Thus, the ID
+ range only determines the quantity of Geneve networks in the
+ environment. For example, a range of ``5001:6000`` defines a maximum
+ of 1000 Geneve networks.
+
+ * Optionally, enable support for VLAN provider and self-service
+ networks on one or more physical networks. If you specify only
+ the physical network, only administrative (privileged) users can
+ manage VLAN networks. Additionally specifying a VLAN ID range for
+ a physical network enables regular (non-privileged) users to
+ manage VLAN networks. The Networking service allocates the VLAN ID
+ for each self-service network using the VLAN ID range for the
+ physical network.
+
+ .. code-block:: ini
+
+ [ml2_type_vlan]
+ ...
+ network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID
+
+ Replace ``PHYSICAL_NETWORK`` with the physical network name and
+ optionally define the minimum and maximum VLAN IDs. Use a comma
+ to separate each physical network.
+
+ For example, to enable support for administrative VLAN networks
+ on the ``physnet1`` network and self-service VLAN networks on
+ the ``physnet2`` network using VLAN IDs 1001 to 2000:
+
+ .. code-block:: ini
+
+ network_vlan_ranges = physnet1,physnet2:1001:2000
+
+ * Enable security groups.
+
+ .. code-block:: ini
+
+ [securitygroup]
+ ...
+ enable_security_group = true
+
+ .. note::
+
+ The ``firewall_driver`` option under ``[securitygroup]`` is ignored
+ since the OVN ML2 driver itself handles security groups.
+
+   * Configure OVS database access and the L3 scheduler.
+
+ .. code-block:: ini
+
+ [ovn]
+ ...
+ ovn_nb_connection = tcp:IP_ADDRESS:6641
+ ovn_sb_connection = tcp:IP_ADDRESS:6642
+ ovn_l3_scheduler = OVN_L3_SCHEDULER
+
+ .. note::
+
+ Replace ``IP_ADDRESS`` with the IP address of the controller node that
+ runs the ``ovsdb-server`` service. Replace ``OVN_L3_SCHEDULER`` with
+ ``leastloaded`` if you want the scheduler to select a compute node with
+ the least number of gateway ports or ``chance`` if you want the
+ scheduler to randomly select a compute node from the available list of
+ compute nodes.
+
+#. Start the ``neutron-server`` service.
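+
+   Using the *systemd* unit (the unit name may vary by distribution):
+
+   .. code-block:: console
+
+      # systemctl start neutron-server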
+
+Network nodes
+-------------
+
+Deployments using OVN native layer-3 and DHCP services do not require
+conventional network nodes because connectivity to external networks
+(including VTEP gateways) and routing occurs on compute nodes.
+
+Compute nodes
+-------------
+
+Each compute node runs the OVS and ``ovn-controller`` services. The
+``ovn-controller`` service replaces the conventional OVS layer-2 agent.
+
+#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
+
+#. Start the OVS service.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start openvswitch
+
+ Using the ``ovs-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
+
+#. Configure the OVS service.
+
+ * Use OVS databases on the controller node.
+
+ .. code-block:: console
+
+ # ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642
+
+ Replace ``IP_ADDRESS`` with the IP address of the controller node
+ that runs the ``ovsdb-server`` service.
+
+ * Enable one or more overlay network protocols. At a minimum, OVN requires
+ enabling the ``geneve`` protocol. Deployments using VTEP gateways should
+ also enable the ``vxlan`` protocol.
+
+ .. code-block:: console
+
+ # ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
+
+ .. note::
+
+ Deployments without VTEP gateways can safely enable both protocols.
+
+ * Configure the overlay network local endpoint IP address.
+
+ .. code-block:: console
+
+ # ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS
+
+ Replace ``IP_ADDRESS`` with the IP address of the overlay network
+ interface on the compute node.
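+
+   If preferred, the three settings above can be combined into a single
+   ``ovs-vsctl`` invocation, here using ``CONTROLLER_IP`` and ``LOCAL_IP``
+   as placeholders for the controller node and local overlay addresses:
+
+   .. code-block:: console
+
+      # ovs-vsctl set open . \
+            external-ids:ovn-remote=tcp:CONTROLLER_IP:6642 \
+            external-ids:ovn-encap-type=geneve,vxlan \
+            external-ids:ovn-encap-ip=LOCAL_IP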
+
+#. Start the ``ovn-controller`` service.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start ovn-controller
+
+ Using the ``ovn-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovn-ctl start_controller
+
+Verify operation
+----------------
+
+#. Each compute node should contain an ``ovn-controller`` instance.
+
+ .. code-block:: console
+
+ # ovn-sbctl show
+ <output>
diff --git a/docs/development/ovn-for-nfv.rst b/docs/development/ovn-for-nfv.rst
new file mode 100644
index 0000000..3ae9c99
--- /dev/null
+++ b/docs/development/ovn-for-nfv.rst
@@ -0,0 +1,23 @@
+OVN4NFV Project
+===============
+OVN complements the existing capabilities of OVS, adding native support for
+virtual network abstractions such as virtual L2 and L3 overlays, L3 routing,
+and security groups. Instead of treating OVSDB and OpenFlow actions
+separately, OVN provides a simpler interface for managing virtual networks.
+
+Besides the simpler interface, OVN takes care of transforming the simple flow
+rules of a virtual network into the complex OpenFlow rules on the Open
+vSwitches involved. The OpenStack project networking-ovn implements the
+Neutron API using OVN. As part of the ovn4nfv project we would like to enable
+OVN, along with the OpenStack Neutron plugin networking-ovn, as a deployable
+network control component in the OPNFV build. This would make it easier to
+manage virtual networks and push more of the network intelligence to the
+edge, onto the compute nodes. Since OVN has inherent support for containers,
+this would also allow OPNFV to orchestrate container VNFs.
+
+Furthermore, this makes the controller architecture much simpler and more
+scalable by placing the controller (ovn-controller) next to the Open vSwitch.
+
+OVN for OPNFV at OPNFV Design Summit, Nov, 2015
+ - https://wiki.opnfv.org/_media/events/ovn-opnfv-summit2015.pdf
+
diff --git a/docs/release/configuration-guide.rst b/docs/release/configuration-guide.rst
new file mode 100644
index 0000000..12a6243
--- /dev/null
+++ b/docs/release/configuration-guide.rst
@@ -0,0 +1,187 @@
+DevStack Configuration
+======================
+
+DevStack - localrc/local.conf
+-----------------------------
+
+Sample DevStack local.conf.
+
+.. code-block:: console
+
+ #
+ # This sample file is intended to be used for your typical DevStack environment
+ # that's running all of OpenStack on a single host. This can also be used as
+ # the first host of a multi-host test environment.
+ #
+ # No changes to this sample configuration are required for this to work.
+ #
+ DATABASE_PASSWORD=password
+ RABBIT_PASSWORD=password
+ SERVICE_PASSWORD=password
+ SERVICE_TOKEN=password
+ ADMIN_PASSWORD=password
+
+ # The DevStack plugin defaults to using the ovn branch from the official ovs
+ # repo. You can optionally use a different one. For example, you may want to
+ # use the latest patches in blp's ovn branch:
+ #OVN_REPO=https://github.com/blp/ovs-reviews.git
+ #OVN_BRANCH=ovn
+
+ enable_plugin networking-ovn https://git.openstack.org/openstack/networking-ovn
+ enable_service ovn-northd
+ enable_service ovn-controller
+ enable_service networking-ovn-metadata-agent
+
+ # Use Neutron instead of nova-network
+ disable_service n-net
+ enable_service q-svc
+
+ # Disable Neutron agents not used with OVN.
+ disable_service q-agt
+ disable_service q-l3
+ disable_service q-dhcp
+ disable_service q-meta
+
+ # Horizon (the web UI) is enabled by default. You may want to disable
+ # it here to speed up DevStack a bit.
+ enable_service horizon
+ #disable_service horizon
+
+ # Cinder (OpenStack Block Storage) is disabled by default to speed up
+ # DevStack a bit. You may enable it here if you would like to use it.
+ disable_service cinder c-sch c-api c-vol
+ #enable_service cinder c-sch c-api c-vol
+
+ # How to connect to ovsdb-server hosting the OVN NB database.
+ #OVN_NB_REMOTE=tcp:$SERVICE_HOST:6641
+
+ # How to connect to ovsdb-server hosting the OVN SB database.
+ #OVN_SB_REMOTE=tcp:$SERVICE_HOST:6642
+
+ # A UUID to uniquely identify this system. If one is not specified, a random
+ # one will be generated and saved in the file 'ovn-uuid' for re-use in future
+ # DevStack runs.
+ #OVN_UUID=
+
+ # If using the OVN native layer-3 service, choose a router scheduler to
+ # manage the distribution of router gateways on hypervisors/chassis.
+ # Default value is leastloaded.
+ #OVN_L3_SCHEDULER=leastloaded
+
+ # Whether or not to build custom openvswitch kernel modules from the ovs git
+ # tree. This is enabled by default. This is required unless your distro kernel
+ # includes ovs+conntrack support. This support was first released in Linux 4.3,
+ # and will likely be backported by some distros.
+ #OVN_BUILD_MODULES=False
+
+ # Enable services, these services depend on neutron plugin.
+ #enable_plugin neutron https://git.openstack.org/openstack/neutron
+ #enable_service q-qos
+ #enable_service q-trunk
+
+ # Skydive
+ #enable_plugin skydive https://github.com/redhat-cip/skydive.git
+ #enable_service skydive-analyzer
+ #enable_service skydive-agent
+
+ # If you want to enable a provider network instead of the default private
+ # network after your DevStack environment installation, you *must* set
+ # the Q_USE_PROVIDER_NETWORKING to True, and also give FIXED_RANGE,
+ # NETWORK_GATEWAY and ALLOCATION_POOL option to the correct value that can
+ # be used in your environment. Specifying Q_AGENT is needed to allow devstack
+ # to run various "ip link set" and "ovs-vsctl" commands for the provider
+ # network setup.
+ #Q_AGENT=openvswitch
+ #Q_USE_PROVIDER_NETWORKING=True
+ #PHYSICAL_NETWORK=providernet
+ #PROVIDER_NETWORK_TYPE=flat
+ #PUBLIC_INTERFACE=<public interface>
+ #OVS_PHYSICAL_BRIDGE=br-provider
+ #PROVIDER_SUBNET_NAME=provider-subnet
+ # use the following for IPv4
+ #IP_VERSION=4
+ #FIXED_RANGE=<CIDR for the Provider Network>
+ #NETWORK_GATEWAY=<Provider Network Gateway>
+ #ALLOCATION_POOL=<Provider Network Allocation Pool>
+ # use the following for IPv4+IPv6
+ #IP_VERSION=4+6
+ #FIXED_RANGE=<CIDR for the Provider Network>
+ #NETWORK_GATEWAY=<Provider Network Gateway>
+ #ALLOCATION_POOL=<Provider Network Allocation Pool>
+ # IPV6_PROVIDER_FIXED_RANGE=<v6 CDIR for the Provider Network>
+ # IPV6_PROVIDER_NETWORK_GATEWAY=<v6 Gateway for the Provider Network>
+
+ # If you wish to use the provider network for public access to the cloud,
+ # set the following
+ #Q_USE_PROVIDERNET_FOR_PUBLIC=True
+ #PUBLIC_NETWORK_NAME=<Provider network name>
+ #PUBLIC_NETWORK_GATEWAY=<Provider network gateway>
+ #PUBLIC_PHYSICAL_NETWORK=<Provider network name>
+ #IP_VERSION=4
+ #PUBLIC_SUBNET_NAME=<provider subnet name>
+ #Q_FLOATING_ALLOCATION_POOL=<Provider Network Allocation Pool>
+ #FLOATING_RANGE=<CIDR for the Provider Network>
+
+ # NOTE: DO NOT MOVE THESE SECTIONS FROM THE END OF THIS FILE
+ # IF YOU DO, THEY WON'T WORK!!!!!
+ #
+
+ # Enable Nova automatic host discovery for cell every 2 seconds
+ # Only needed in case of multinode devstack, as otherwise there will be issues
+ # when the 2nd compute node goes online.
+ discover_hosts_in_cells_interval=2
+
+
+Neutron - metadata_agent.ini
+----------------------------
+
+The following configuration options in /etc/neutron/metadata_agent.ini
+are required when OVN is enabled in OpenStack Neutron.
+
+.. code-block:: console
+
+ ...
+ ...
+ [ovs]
+ #
+ # From networking_ovn.metadata.agent
+ #
+
+ # The connection string for the native OVSDB backend.
+ # Use tcp:IP:PORT for TCP connection.
+ # Use unix:FILE for unix domain socket connection. (string value)
+ #ovsdb_connection = unix:/usr/local/var/run/openvswitch/db.sock
+
+ # Timeout in seconds for the OVSDB connection transaction (integer value)
+ #ovsdb_connection_timeout = 180
+
+ [ovn]
+ ovn_sb_connection = tcp:<controller-ip>:<port>
+
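+For example, with the OVN southbound database listening on its conventional
+port 6642 (the controller address below is only illustrative):
+
+.. code-block:: console
+
+   [ovn]
+   ovn_sb_connection = tcp:192.0.2.10:6642
+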
+
+Neutron - neutron.conf
+----------------------
+
+The following configuration changes are required in /etc/neutron/neutron.conf.
+
+.. code-block:: console
+
+ [DEFAULT]
+ service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin
+
+Neutron - ml2_conf.ini
+----------------------
+
+The following configuration changes are required in
+/etc/neutron/plugins/ml2/ml2_conf.ini.
+
+.. code-block:: console
+
+ [ml2]
+ mechanism_drivers = ovn,logger
+
+ [ovn]
+ ovn_metadata_enabled = True
+ ovn_l3_scheduler = leastloaded
+ neutron_sync_mode = log
+ ovn_sb_connection = tcp:<controller-ip>:<port>
+ ovn_nb_connection = tcp:<controller-ip>:<port>
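+
+By convention the OVN northbound database listens on port 6641 and the
+southbound database on port 6642 (as in the controller setup earlier), so a
+filled-in example with an illustrative controller address would be:
+
+.. code-block:: console
+
+   [ml2]
+   mechanism_drivers = ovn,logger
+
+   [ovn]
+   ovn_metadata_enabled = True
+   ovn_l3_scheduler = leastloaded
+   neutron_sync_mode = log
+   ovn_sb_connection = tcp:192.0.2.10:6642
+   ovn_nb_connection = tcp:192.0.2.10:6641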
diff --git a/docs/release/index.rst b/docs/release/index.rst
new file mode 100644
index 0000000..24da448
--- /dev/null
+++ b/docs/release/index.rst
@@ -0,0 +1,21 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License. http://creativecommons.org/licenses/by/4.0
+
+==========================
+OVN4NFV Release Notes
+==========================
+
+:Project: OVN4NFV, https://wiki.opnfv.org/display/PROJ/Ovn4nfv
+
+:Editors: Vikram Dham (Huawei)
+:Authors: Vikram Dham (Huawei)
+ Trinath Somanchi (NXP)
+ Prakash Ramchandran (Huawei)
+
+:Abstract: OVN4NFV Release Notes.
+
+.. toctree::
+ :maxdepth: 1
+
+ release-notes.rst
+ configuration-guide.rst
diff --git a/docs/release/release-notes.rst b/docs/release/release-notes.rst
new file mode 100644
index 0000000..9d8a978
--- /dev/null
+++ b/docs/release/release-notes.rst
@@ -0,0 +1,14 @@
+OVN4NFV Release Notes
+=====================
+
+Release: Euphrates
+
+Installer Support
+------------------
+
+Features
+--------
+
+
+
+
diff --git a/docs/testing/index.rst b/docs/testing/index.rst
new file mode 100644
index 0000000..b0d0585
--- /dev/null
+++ b/docs/testing/index.rst
@@ -0,0 +1,20 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License. http://creativecommons.org/licenses/by/4.0
+
+==========================
+OVN4NFV Testing Notes
+==========================
+
+:Project: OVN4NFV, https://wiki.opnfv.org/display/PROJ/Ovn4nfv
+
+:Editors: Vikram Dham (Huawei)
+:Authors: Vikram Dham (Huawei)
+ Trinath Somanchi (NXP)
+ Prakash Ramchandran (Huawei)
+
+:Abstract: OVN4NFV Testing Notes.
+
+.. toctree::
+ :maxdepth: 1
+
+ testing-notes.rst
diff --git a/docs/testing/testing-notes.rst b/docs/testing/testing-notes.rst
new file mode 100644
index 0000000..d5e2bfd
--- /dev/null
+++ b/docs/testing/testing-notes.rst
@@ -0,0 +1,620 @@
+Testing with DevStack
+=====================
+
+This document describes how to test OpenStack with OVN using DevStack. We will
+start by describing how to test on a single host.
+
+Single Node Test Environment
+----------------------------
+
+1. Create a test system.
+
+It's best to use a throwaway dev system for running DevStack. Good choices
+are either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
+
+2. Create the ``stack`` user.
+
+::
+
+ $ git clone https://git.openstack.org/openstack-dev/devstack.git
+ $ sudo ./devstack/tools/create-stack-user.sh
+
+3. Switch to the ``stack`` user and clone DevStack and networking-ovn.
+
+::
+
+ $ sudo su - stack
+ $ git clone https://git.openstack.org/openstack-dev/devstack.git
+ $ git clone https://git.openstack.org/openstack/networking-ovn.git
+
+4. Configure DevStack to use networking-ovn.
+
+networking-ovn comes with a sample DevStack configuration file you can start
+with. For example, you may want to set some values for the various PASSWORD
+variables in that file so DevStack doesn't have to prompt you for them. Feel
+free to edit it if you'd like, but it should work as-is.
+
+::
+
+ $ cd devstack
+ $ cp ../networking-ovn/devstack/local.conf.sample local.conf
+
+5. Run DevStack.
+
+This is going to take a while. It installs a bunch of packages, clones a bunch
+of git repos, and installs everything from these git repos.
+
+::
+
+ $ ./stack.sh
+
+Once DevStack completes successfully, you should see output that looks
+something like this::
+
+ This is your host IP address: 172.16.189.6
+ This is your host IPv6 address: ::1
+ Horizon is now available at http://172.16.189.6/dashboard
+ Keystone is serving at http://172.16.189.6/identity/
+ The default users are: admin and demo
+ The password: password
+ 2017-03-09 15:10:54.117 | stack.sh completed in 2110 seconds.
+
+Environment Variables
+---------------------
+
+Once DevStack finishes successfully, we're ready to start interacting with
+OpenStack APIs. OpenStack provides a set of command line tools for interacting
+with these APIs. DevStack provides a file you can source to set up the right
+environment variables to make the OpenStack command line tools work.
+
+::
+
+ $ . openrc
+
+If you're curious what environment variables are set, they generally start with
+an OS prefix::
+
+ $ env | grep OS
+ OS_REGION_NAME=RegionOne
+ OS_IDENTITY_API_VERSION=2.0
+ OS_PASSWORD=password
+ OS_AUTH_URL=http://192.168.122.8:5000/v2.0
+ OS_USERNAME=demo
+ OS_TENANT_NAME=demo
+ OS_VOLUME_API_VERSION=2
+ OS_CACERT=/opt/stack/data/CA/int-ca/ca-chain.pem
+ OS_NO_CACHE=1
+
+Default Network Configuration
+-----------------------------
+
+By default, DevStack creates networks called ``private`` and ``public``.
+Run the following command to see the existing networks::
+
+ $ openstack network list
+ +--------------------------------------+---------+----------------------------------------------------------------------------+
+ | ID | Name | Subnets |
+ +--------------------------------------+---------+----------------------------------------------------------------------------+
+ | 40080dad-0064-480a-b1b0-592ae51c1471 | private | 5ff81545-7939-4ae0-8365-1658d45fa85c, da34f952-3bfc-45bb-b062-d2d973c1a751 |
+ | 7ec986dd-aae4-40b5-86cf-8668feeeab67 | public | 60d0c146-a29b-4cd3-bd90-3745603b1a4b, f010c309-09be-4af2-80d6-e6af9c78bae7 |
+ +--------------------------------------+---------+----------------------------------------------------------------------------+
+
+A Neutron network is implemented as an OVN logical switch. networking-ovn
+creates logical switches with a name in the format neutron-<network UUID>.
+We can use ``ovn-nbctl`` to list the configured logical switches and see that
+their names correlate with the output from ``openstack network list``::
+
+ $ ovn-nbctl ls-list
+ 71206f5c-b0e6-49ce-b572-eb2e964b2c4e (neutron-40080dad-0064-480a-b1b0-592ae51c1471)
+ 8d8270e7-fd51-416f-ae85-16565200b8a4 (neutron-7ec986dd-aae4-40b5-86cf-8668feeeab67)
+
+ $ ovn-nbctl get Logical_Switch neutron-40080dad-0064-480a-b1b0-592ae51c1471 external_ids
+ {"neutron:network_name"=private}
+
+Booting VMs
+-----------
+
+In this section we'll go through the steps to create two VMs that have a
+virtual NIC attached to the ``private`` Neutron network.
+
+DevStack uses libvirt as the Nova backend by default. If KVM is available, it
+will be used. Otherwise, it will just run QEMU-emulated guests. This is
+perfectly fine for our testing, as we only need these VMs to be able to send
+and receive a small amount of traffic, so performance is not very important.
+
+1. Get the Network UUID.
+
+Start by getting the UUID for the ``private`` network from the output of
+``openstack network list`` from earlier and save it off::
+
+ $ PRIVATE_NET_ID=40080dad-0064-480a-b1b0-592ae51c1471
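+
+If you prefer to script this step, you can capture the UUID directly; the
+``-f value -c id`` output options are standard ``openstack`` client flags::
+
+    $ PRIVATE_NET_ID=$(openstack network show private -f value -c id)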
+
+2. Create an SSH keypair.
+
+Next create an SSH keypair in Nova. Later, when we boot a VM, we'll ask that
+the public key be put in the VM so we can SSH into it.
+
+::
+
+ $ openstack keypair create demo > id_rsa_demo
+ $ chmod 600 id_rsa_demo
+
+3. Choose a flavor.
+
+We need minimal resources for these test VMs, so the ``m1.nano`` flavor is
+sufficient.
+
+::
+
+ $ openstack flavor list
+ +----+-----------+-------+------+-----------+-------+-----------+
+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+ +----+-----------+-------+------+-----------+-------+-----------+
+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
+ | 2 | m1.small | 2048 | 20 | 0 | 1 | True |
+ | 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
+ | 4 | m1.large | 8192 | 80 | 0 | 4 | True |
+ | 42 | m1.nano | 64 | 0 | 0 | 1 | True |
+ | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+ | 84 | m1.micro | 128 | 0 | 0 | 1 | True |
+ | c1 | cirros256 | 256 | 0 | 0 | 1 | True |
+ | d1 | ds512M | 512 | 5 | 0 | 1 | True |
+ | d2 | ds1G | 1024 | 10 | 0 | 1 | True |
+ | d3 | ds2G | 2048 | 10 | 0 | 2 | True |
+ | d4 | ds4G | 4096 | 20 | 0 | 4 | True |
+ +----+-----------+-------+------+-----------+-------+-----------+
+
+ $ FLAVOR_ID=42
+
+4. Choose an image.
+
+DevStack imports the CirrOS image by default, which is perfect for our testing.
+It's a very small test image.
+
+::
+
+ $ openstack image list
+ +--------------------------------------+--------------------------+--------+
+ | ID | Name | Status |
+ +--------------------------------------+--------------------------+--------+
+ | 849a8db2-3754-4cf6-9271-491fa4ff7195 | cirros-0.3.5-x86_64-disk | active |
+ +--------------------------------------+--------------------------+--------+
+
+ $ IMAGE_ID=849a8db2-3754-4cf6-9271-491fa4ff7195
+
+5. Set up security rules so that we can access the VMs we will boot up next.
+
+By default, DevStack does not allow users to access VMs. To enable access, we
+need to add rules. We will allow both ICMP and SSH.
+
+::
+
+ $ openstack security group rule create --ingress --ethertype IPv4 --dst-port 22 --protocol tcp default
+ $ openstack security group rule create --ingress --ethertype IPv4 --protocol ICMP default
+ $ openstack security group rule list
+ +--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | Security Group |
+ +--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
+ ...
+ | ade97198-db44-429e-9b30-24693d86d9b1 | tcp | 0.0.0.0/0 | 22:22 | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
+ | d0861a98-f90e-4d1a-abfb-827b416bc2f6 | icmp | 0.0.0.0/0 | | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
+ ...
+ +--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
+
+    $ # The legacy neutron client can create equivalent rules:
+    $ neutron security-group-rule-create --direction ingress --ethertype IPv4 --port-range-min 22 --port-range-max 22 --protocol tcp default
+ $ neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol ICMP default
+ $ neutron security-group-rule-list
+ +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
+ | id | security_group | direction | ethertype | protocol/port | remote |
+ +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
+ | 8b2edbe6-790e-40ef-af54-c7b64ced8240 | default | ingress | IPv4 | 22/tcp | any |
+ | 5bee0179-807b-41d7-ab16-6de6ac051335 | default | ingress | IPv4 | icmp | any |
+ ...
+ +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
+
+6. Boot some VMs.
+
+Now we will boot two VMs. We'll name them ``test1`` and ``test2``.
+
+::
+
+ $ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test1
+ +-----------------------------+-----------------------------------------------------------------+
+ | Field | Value |
+ +-----------------------------+-----------------------------------------------------------------+
+ | OS-DCF:diskConfig | MANUAL |
+ | OS-EXT-AZ:availability_zone | |
+ | OS-EXT-STS:power_state | NOSTATE |
+ | OS-EXT-STS:task_state | scheduling |
+ | OS-EXT-STS:vm_state | building |
+ | OS-SRV-USG:launched_at | None |
+ | OS-SRV-USG:terminated_at | None |
+ | accessIPv4 | |
+ | accessIPv6 | |
+ | addresses | |
+ | adminPass | BzAWWA6byGP6 |
+ | config_drive | |
+ | created | 2017-03-09T16:56:08Z |
+ | flavor | m1.nano (42) |
+ | hostId | |
+ | id | d8b8084e-58ff-44f4-b029-a57e7ef6ba61 |
+ | image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
+ | key_name | demo |
+ | name | test1 |
+ | progress | 0 |
+ | project_id | b6522570f7344c06b1f24303abf3c479 |
+ | properties | |
+ | security_groups | name='default' |
+ | status | BUILD |
+ | updated | 2017-03-09T16:56:08Z |
+ | user_id | c68f77f1d85e43eb9e5176380a68ac1f |
+ | volumes_attached | |
+ +-----------------------------+-----------------------------------------------------------------+
+
+ $ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test2
+ +-----------------------------+-----------------------------------------------------------------+
+ | Field | Value |
+ +-----------------------------+-----------------------------------------------------------------+
+ | OS-DCF:diskConfig | MANUAL |
+ | OS-EXT-AZ:availability_zone | |
+ | OS-EXT-STS:power_state | NOSTATE |
+ | OS-EXT-STS:task_state | scheduling |
+ | OS-EXT-STS:vm_state | building |
+ | OS-SRV-USG:launched_at | None |
+ | OS-SRV-USG:terminated_at | None |
+ | accessIPv4 | |
+ | accessIPv6 | |
+ | addresses | |
+ | adminPass | YB8dmt5v88JV |
+ | config_drive | |
+ | created | 2017-03-09T16:56:50Z |
+ | flavor | m1.nano (42) |
+ | hostId | |
+ | id | 170d4f37-9299-4a08-b48b-2b90fce8e09b |
+ | image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
+ | key_name | demo |
+ | name | test2 |
+ | progress | 0 |
+ | project_id | b6522570f7344c06b1f24303abf3c479 |
+ | properties | |
+ | security_groups | name='default' |
+ | status | BUILD |
+ | updated | 2017-03-09T16:56:51Z |
+ | user_id | c68f77f1d85e43eb9e5176380a68ac1f |
+ | volumes_attached | |
+ +-----------------------------+-----------------------------------------------------------------+
+
+Once both VMs have been started, they will have a status of ``ACTIVE``::
+
+ $ openstack server list
+ +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
+ | ID | Name | Status | Networks | Image Name |
+ +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
+ | 170d4f37-9299-4a08-b48b-2b90fce8e09b | test2 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe24:49df, 10.0.0.3 | cirros-0.3.5-x86_64-disk |
+ | d8b8084e-58ff-44f4-b029-a57e7ef6ba61 | test1 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe3f:953d, 10.0.0.10 | cirros-0.3.5-x86_64-disk |
+ +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
+
+Our two VMs have addresses of ``10.0.0.3`` and ``10.0.0.10``. If we list
+Neutron ports, there are two new ports with these addresses associated
+with them::
+
+ $ openstack port list
+ +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
+ | ID | Name | MAC Address | Fixed IP Addresses | Status |
+ +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
+ ...
+ | 97c970b0-485d-47ec-868d-783c2f7acde3 | | fa:16:3e:3f:95:3d | ip_address='10.0.0.10', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
+ | | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe3f:953d', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
+ | e003044d-334a-4de3-96d9-35b2d2280454 | | fa:16:3e:24:49:df | ip_address='10.0.0.3', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
+ | | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe24:49df', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
+ ...
+ +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
+
+ $ TEST1_PORT_ID=97c970b0-485d-47ec-868d-783c2f7acde3
+ $ TEST2_PORT_ID=e003044d-334a-4de3-96d9-35b2d2280454
+
+Now we can look at OVN using ``ovn-nbctl`` to see the logical switch ports
+that were created for these two Neutron ports. The first part of the output
+is the OVN logical switch port UUID. The second part in parentheses is the
+logical switch port name. Neutron sets the logical switch port name equal to
+the Neutron port ID.
+
+::
+
+ $ ovn-nbctl lsp-list neutron-$PRIVATE_NET_ID
+ ...
+ fde1744b-e03b-46b7-b181-abddcbe60bf2 (97c970b0-485d-47ec-868d-783c2f7acde3)
+ 7ce284a8-a48a-42f5-bf84-b2bca62cd0fe (e003044d-334a-4de3-96d9-35b2d2280454)
+ ...
+
+
+These two ports correspond to the two VMs we created.
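+
+As a further check, the addresses on an OVN logical switch port should match
+the corresponding Neutron port. A sketch using one of the port IDs saved
+above (MAC and IPs taken from the port listing earlier)::
+
+    $ ovn-nbctl lsp-get-addresses $TEST1_PORT_ID
+    fa:16:3e:3f:95:3d 10.0.0.10 fd5d:9d1b:457c:0:f816:3eff:fe3f:953d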
+
+VM Connectivity
+---------------
+
+We can connect to our VMs by associating a floating IP address from the public
+network.
+
+::
+
+ $ openstack floating ip create --port $TEST1_PORT_ID public
+ +---------------------+--------------------------------------+
+ | Field | Value |
+ +---------------------+--------------------------------------+
+ | created_at | 2017-03-09T18:58:12Z |
+ | description | |
+ | fixed_ip_address | 10.0.0.10 |
+ | floating_ip_address | 172.24.4.8 |
+ | floating_network_id | 7ec986dd-aae4-40b5-86cf-8668feeeab67 |
+ | id | 24ff0799-5a72-4a5b-abc0-58b301c9aee5 |
+ | name | None |
+ | port_id | 97c970b0-485d-47ec-868d-783c2f7acde3 |
+ | project_id | b6522570f7344c06b1f24303abf3c479 |
+ | revision_number | 1 |
+ | router_id | ee51adeb-0dd8-4da0-ab6f-7ce60e00e7b0 |
+ | status | DOWN |
+ | updated_at | 2017-03-09T18:58:12Z |
+ +---------------------+--------------------------------------+
+
+DevStack does not wire up the public network by default, so we must do
+that before connecting to this floating IP address.
+
+::
+
+ $ sudo ip link set br-ex up
+ $ sudo ip route add 172.24.4.0/24 dev br-ex
+ $ sudo ip addr add 172.24.4.1/24 dev br-ex
+
+Now you should be able to connect to the VM via its floating IP address.
+First, ping the address.
+
+::
+
+ $ ping -c 1 172.24.4.8
+ PING 172.24.4.8 (172.24.4.8) 56(84) bytes of data.
+ 64 bytes from 172.24.4.8: icmp_seq=1 ttl=63 time=0.823 ms
+
+ --- 172.24.4.8 ping statistics ---
+ 1 packets transmitted, 1 received, 0% packet loss, time 0ms
+ rtt min/avg/max/mdev = 0.823/0.823/0.823/0.000 ms
+
+Now SSH to the VM::
+
+ $ ssh -i id_rsa_demo cirros@172.24.4.8 hostname
+ test1
+
+Adding Another Compute Node
+---------------------------
+
+After completing the earlier instructions for setting up DevStack, you can use
+a second VM to emulate an additional compute node. This is important for OVN
+testing as it exercises the tunnels created by OVN between the hypervisors.
+
+Just as before, create a throwaway VM, but make sure that this VM has a
+different host name. Having the same host name for both VMs will confuse Nova
+and will not produce two hypervisors when you query the hypervisor list later.
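+
+If necessary, give the new VM a distinct host name before stacking (the name
+used here is only an example)::
+
+    $ sudo hostnamectl set-hostname centos7-ovn-devstack-2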
+Once the VM is set up, create the ``stack`` user::
+
+ $ git clone https://git.openstack.org/openstack-dev/devstack.git
+ $ sudo ./devstack/tools/create-stack-user.sh
+
+Switch to the ``stack`` user and clone DevStack and networking-ovn::
+
+ $ sudo su - stack
+ $ git clone https://git.openstack.org/openstack-dev/devstack.git
+ $ git clone https://git.openstack.org/openstack/networking-ovn.git
+
+networking-ovn comes with another sample configuration file that can be used
+for this::
+
+ $ cd devstack
+ $ cp ../networking-ovn/devstack/computenode-local.conf.sample local.conf
+
+You must set SERVICE_HOST in local.conf to the IP address of the main DevStack
+host, and HOST_IP to the IP address of this new host. See the text in the
+sample configuration file for more information.
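+
+Using the host addresses shown in this document as an example, the additions
+to ``local.conf`` would look like this::
+
+    SERVICE_HOST=172.16.189.6
+    HOST_IP=172.16.189.30
+
+Once that is complete, run DevStack::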
+
+ $ cd devstack
+ $ ./stack.sh
+
+This should complete in less time than before, as it's only running a single
+OpenStack service (nova-compute) along with OVN (ovn-controller, ovs-vswitchd,
+ovsdb-server). The final output will look something like this::
+
+
+ This is your host IP address: 172.16.189.30
+ This is your host IPv6 address: ::1
+ 2017-03-09 18:39:27.058 | stack.sh completed in 1149 seconds.
+
+Now go back to your main DevStack host. You can use admin credentials to
+verify that the additional hypervisor has been added to the deployment::
+
+ $ cd devstack
+ $ . openrc admin
+
+ $ openstack hypervisor list
+ +----+------------------------+-----------------+---------------+-------+
+ | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+ +----+------------------------+-----------------+---------------+-------+
+ | 1 | centos7-ovn-devstack | QEMU | 172.16.189.6 | up |
+ | 2 | centos7-ovn-devstack-2 | QEMU | 172.16.189.30 | up |
+ +----+------------------------+-----------------+---------------+-------+
+
+You can also look at OVN and OVS to see that the second host has shown up. For
+example, there will be a second entry in the Chassis table of the
+OVN_Southbound database. You can use the ``ovn-sbctl`` utility to list
+chassis, their configuration, and the ports bound to each of them::
+
+ $ ovn-sbctl show
+
+ Chassis "ddc8991a-d838-4758-8d15-71032da9d062"
+ hostname: "centos7-ovn-devstack"
+ Encap vxlan
+ ip: "172.16.189.6"
+ options: {csum="true"}
+ Encap geneve
+ ip: "172.16.189.6"
+ options: {csum="true"}
+ Port_Binding "97c970b0-485d-47ec-868d-783c2f7acde3"
+ Port_Binding "e003044d-334a-4de3-96d9-35b2d2280454"
+ Port_Binding "cr-lrp-08d1f28d-cc39-4397-b12b-7124080899a1"
+ Chassis "b194d07e-0733-4405-b795-63b172b722fd"
+ hostname: "centos7-ovn-devstack-2.os1.phx2.redhat.com"
+ Encap geneve
+ ip: "172.16.189.30"
+ options: {csum="true"}
+ Encap vxlan
+ ip: "172.16.189.30"
+ options: {csum="true"}
+
+You can also see a tunnel created to the other compute node::
+
+ $ ovs-vsctl show
+ ...
+ Bridge br-int
+ fail_mode: secure
+ ...
+ Port "ovn-b194d0-0"
+ Interface "ovn-b194d0-0"
+ type: geneve
+ options: {csum="true", key=flow, remote_ip="172.16.189.30"}
+ ...
+ ...
+
+Provider Networks
+-----------------
+
+Neutron has a "provider networks" API extension that lets you specify
+some additional attributes on a network. These attributes let you
+map a Neutron network to a physical network in your environment.
+The OVN ML2 driver is adding support for this API extension. It currently
+supports "flat" and "vlan" networks.
+
+Here is how you can test it:
+
+First you must create an OVS bridge that provides connectivity to the
+provider network on every host running ovn-controller. For trivial
+testing this could just be a dummy bridge. In a real environment, you
+would want to add a local network interface to the bridge, as well.
+
+::
+
+ $ ovs-vsctl add-br br-provider
+
+ovn-controller on each host must be configured with a mapping between
+a network name and the bridge that provides connectivity to that network.
+In this case we'll create a mapping from the network name "providernet"
+to the bridge 'br-provider".
+
+::
+
+ $ ovs-vsctl set open . \
+ external-ids:ovn-bridge-mappings=providernet:br-provider
+
+Now create a Neutron provider network.
+
+::
+
+ $ neutron net-create provider --shared \
+ --provider:physical_network providernet \
+ --provider:network_type flat
+
+Alternatively, you can define connectivity to a VLAN instead of a flat network:
+
+::
+
+ $ neutron net-create provider-101 --shared \
+ --provider:physical_network providernet \
+ --provider:network_type vlan \
+ --provider:segmentation_id 101
+
+Observe that the OVN ML2 driver created a special logical switch port of type
+localnet on the logical switch to model the connection to the physical network.
+
+::
+
+ $ ovn-nbctl show
+ ...
+ switch 5bbccbbd-f5ca-411b-bad9-01095d6f1316 (neutron-729dbbee-db84-4a3d-afc3-82c0b3701074)
+ port provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
+ addresses: ["unknown"]
+ ...
+
+ $ ovn-nbctl lsp-get-type provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
+ localnet
+
+ $ ovn-nbctl lsp-get-options provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
+ network_name=providernet
+
+If VLAN is used, there will be a VLAN tag shown on the localnet port as well.
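+
+You can read the tag with OVN's generic database commands; a sketch, with the
+port name following the ``provnet-<network UUID>`` pattern shown above::
+
+    $ ovn-nbctl get Logical_Switch_Port provnet-<network UUID> tag
+    101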
+
+Finally, create a Neutron port on the provider network.
+
+::
+
+ $ neutron port-create provider
+
+or if you followed the VLAN example, it would be:
+
+::
+
+ $ neutron port-create provider-101
+
+Run Unit Tests
+--------------
+
+Run the unit tests in the local environment with ``tox``.
+
+::
+
+ $ tox -e py27
+ $ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync
+ $ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync.TestOvnSbSyncML2
+    $ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync.TestOvnSbSyncML2\
+    .test_ovn_sb_sync
+
+Run Functional Tests
+--------------------
+
+You can run the functional tests with ``tox`` in your DevStack environment:
+
+::
+
+ $ cd networking_ovn/tests/functional
+ $ tox -e dsvm-functional
+ $ tox -e dsvm-functional networking_ovn.tests.functional.test_mech_driver\
+ .TestPortBinding.test_port_binding_create_port
+
+If you want to run functional tests in your local clean environment, you may
+need a new working directory.
+
+::
+
+ $ export BASE=/opt/stack
+ $ mkdir -p /opt/stack/new
+ $ cd /opt/stack/new
+
+Next, get networking_ovn, neutron and devstack.
+
+::
+
+ $ git clone https://git.openstack.org/openstack/networking-ovn.git
+ $ git clone https://git.openstack.org/openstack/neutron.git
+ $ git clone https://git.openstack.org/openstack-dev/devstack.git
+
+Then execute the script to prepare the environment.
+
+::
+
+ $ cd networking-ovn/
+ $ ./networking_ovn/tests/contrib/gate_hook.sh
+
+Finally, run the functional tests with ``tox``.
+
+::
+
+ $ cd networking_ovn/tests/functional
+ $ tox -e dsvm-functional
+ $ tox -e dsvm-functional networking_ovn.tests.functional.test_mech_driver\
+ .TestPortBinding.test_port_binding_create_port
+