author     juraj.linkes <jlinkes@cisco.com>         2017-09-19 16:30:43 +0200
committer  Juraj Linkeš <jlinkes@cisco.com>         2017-09-29 08:17:19 +0000
commit     199c7ea22eafc71c10b77c44c19e965b0f874a2b (patch)
tree       958cf175b885d8d4b1545f9ddc7289c9dab80092 /docs/scenarios
parent     32a97a1bab7b0ba8e1d00704030bb6925fc74293 (diff)
DVR documentation updates
Added detailed description
Updated picture describing basic scenario
Updated deployment instructions

Change-Id: Ie89ebc9b489a1317c6d1f4d2f78a5fc38f776e39
Signed-off-by: juraj.linkes <jlinkes@cisco.com>
(cherry picked from commit 0f75e4cc82c5c1c7b5f5b2e675fc4ef7066f4635)
Diffstat (limited to 'docs/scenarios')
-rwxr-xr-x  docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-example.png       | bin 299709 -> 0 bytes
-rwxr-xr-x  docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-sample-setup.png  | bin 0 -> 126489 bytes
-rwxr-xr-x  docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-noha-sample-setup.png | bin 168854 -> 0 bytes
-rwxr-xr-x  docs/scenarios/os-odl-fdio-dvr-noha/scenario.description.rst     | 198
4 files changed, 114 insertions, 84 deletions
diff --git a/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-example.png b/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-example.png
deleted file mode 100755
index 18932c3..0000000
--- a/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-example.png
+++ /dev/null
Binary files differ
diff --git a/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-sample-setup.png b/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-sample-setup.png
new file mode 100755
index 0000000..77d6815
--- /dev/null
+++ b/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-DVR-sample-setup.png
Binary files differ
diff --git a/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-noha-sample-setup.png b/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-noha-sample-setup.png
deleted file mode 100755
index 27c8335..0000000
--- a/docs/scenarios/os-odl-fdio-dvr-noha/FDS-L3-noha-sample-setup.png
+++ /dev/null
Binary files differ
diff --git a/docs/scenarios/os-odl-fdio-dvr-noha/scenario.description.rst b/docs/scenarios/os-odl-fdio-dvr-noha/scenario.description.rst
index 4f09069..b611845 100755
--- a/docs/scenarios/os-odl-fdio-dvr-noha/scenario.description.rst
+++ b/docs/scenarios/os-odl-fdio-dvr-noha/scenario.description.rst
@@ -13,9 +13,10 @@ FastDataStacks OPNFV project. The main components of the
- APEX (TripleO) installer (please also see APEX installer documentation)
- Openstack (in non-HA configuration)
- - OpenDaylight controller (non-clustered)
- controlling layer 2 and layer 3 networking
- - FD.io/VPP virtual forwarder for tenant networking
+ - OpenDaylight controller (non-clustered) controlling networking
+ - FD.io/VPP virtual forwarder for tenant and public networking; the virtual
+ forwarder serves as a layer 3 forwarder on each compute node, providing high
+ availability of layer 3 services
Introduction
============
@@ -38,36 +39,38 @@ NFV infrastructure are
as well as software forwarders. This way virtual and physical
forwarding domains can be seamlessly glued together.
* Policy driven connectivity: Connectivity should respect and
- reflect different business
+ reflect different business policies
In order to meet the desired qualities of an NFV infrastructure, the
following components were chosen for the "Openstack - OpenDaylight - FD.io"
scenario:
- * FD.io Vector Packet Processor (VPP) - a highly scalable,
- high performance, extensible virtual forwarder
+ * FD.io Vector Packet Processor (VPP) - a highly scalable, high performance,
+ extensible virtual forwarder providing fully distributed routing on each
+ compute node
* OpenDaylight Controller - an extensible controller platform which
offers the ability to separate business logic from networking
constructs, supports a diverse set of network devices
- (virtual and physical) via the "group based policy (GBP)"
+ (virtual and physical) via the "Group-Based Policy (GBP)"
component, and can be clustered to achieve a highly available
deployment.
The "Openstack - OpenDaylight - FD.io DVR" scenario provides the capability to
realize a set of use-cases relevant to the deployment of NFV nodes instantiated
-by means of an Openstack orchestration system on FD.io/VPP enabled compute
-nodes. The role of the Opendaylight network controller in this integration is
-twofold. It provides a network device configuration and topology abstraction
-via the Openstack Neutron interface, while providing the capability to realize
-more complex network policies by means of Group Based Policies. Furthermore it
-also provides the capabilities to monitor as well as visualize the operation of
-the virtual network devices and their topologies. In supporting the general
-use-case of instantiatiting an NFV instance, two specific types of network
-transport use cases are realized:
-
- * NFV instances with VPP data-plane forwarding using a VLAN provider network
- * NFV instances with VPP data-plane forwarding using a VXLAN overlay
- transport network
+by means of an Openstack orchestration system on FD.io/VPP enabled controller
+and compute nodes, with compute nodes being the hosts for distributed virtual
+routing. Distributed virtual routing enables all forwarding operations to be
+available locally on each compute node, removing scaling issues and performance
+bottlenecks. In addition, the forwarding setup is highly available since
+a failure on a given compute node is local to that node and doesn't affect
+routing anywhere else. The role of the Opendaylight network controller in this
+integration is twofold. It provides a network device configuration and topology
+abstraction via the Openstack Neutron interface, while providing the capability
+to realize more complex network policies by means of Group Based Policies.
+Furthermore it also provides the capabilities to monitor as well as visualize
+the operation of the virtual network devices and their topologies.
+In supporting the general use-case of instantiating an NFV instance,
+a VXLAN GPE overlay encapsulation transport network is used.
A deployment of the "apex-os-odl-fdio-dvr-noha" scenario consists of 4 or more
servers:
@@ -75,8 +78,8 @@ servers:
* 1 Jumphost hosting the APEX installer - running the Undercloud
* 1 Controlhost, which runs the Overcloud as well as
OpenDaylight as a network controller
- * 2 or more Computehosts. These Computehosts also serve as
- layer 3 gateways for tenant networks.
+ * 2 or more Computehosts. These Computehosts also serve as layer 3 gateways
+ for tenant networks and provide distributed virtual routing.
TODO: update the image:
1. Compute 0..N are gateways
@@ -90,55 +93,52 @@ networking interface. This means that VPP is used for communication within
a tenant network, between tenant networks, as well as between a tenant network
and the Internet.
-Note that this setup slightly differs from the usual
-centralized L3 setup with qrouter on the control node. This setup was chosen
-to limit the configuration changes for the introduction of FD.io/VPP. The
-OpenDaylight network controller is used to setup and manage layer 2 and
-layer 3 networking for the scenario - with Group Based Policy (GBP) being the
-key component. Tenant networking can either leverage VXLAN (in which case a
-full mesh of VXLAN tunnels is created) or VLANs.
-
-The picture below shows an example setup with two compute and one control
-node. Note that the external network is connected via compute node 0 through
-VPP. VPP provides all layer 3 services which are provided in a "vanilla"
-OpenStack deployment, including SNAT and DNAT, as well as north-south
-and east-west traffic filtering for security purposes ("security groups").
+Note that this setup differs from the usual centralized layer 3 setup with
+qrouter on a controller node. There is no layer 2 networking. The OpenDaylight
+network controller is used to set up and manage layer 3 networking for the
+scenario, with Group Based Policy (GBP) and the Locator/Identifier Separation
+Protocol (LISP) Flow Mapping Service being the key components. Tenant
+networking leverages VXLAN GPE encapsulation, where the LISP Flow Mapping
+Service and the LISP protocol in VPP create tunnels between nodes only where
+required, providing dynamic, fail-safe connectivity between nodes and
+obviating the need for full mesh maintenance and monitoring.
+
+The picture below shows an example setup with two compute nodes and one
+controller node. Note that the external network is connected via each compute
+node through VPP, providing fully distributed routing. VPP provides almost all
+layer 3 services found in a "vanilla" OpenStack deployment, including
+one-to-one NAT but not source NAT, as well as north-south and east-west
+traffic filtering for security purposes ("security groups").
TODO: update the image:
1. Add External network interface to Computenode-1
-.. image:: FDS-L3-noha-sample-setup.png
+.. image:: FDS-L3-DVR-sample-setup.png
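+
+To verify on a compute node that VPP indeed owns the data plane, the forwarder
+can be inspected with its debug CLI. The commands below are standard VPP CLI
+calls; interface names and table contents in the output are deployment
+specific::
+
+  # list the interfaces managed by VPP, including the vhost-user ports of VMs
+  sudo vppctl show interface
+  # dump the IPv4 forwarding (FIB) tables known to VPP
+  sudo vppctl show ip fib
+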
Features of the scenario
-------------------------
+========================
Main features of the "apex-os-odl-fdio-dvr-noha" scenario:
* Automated installation using the APEX installer
* Fast and scalable tenant networking using FD.io/VPP as forwarder
- * Layer 2 networking using VLANs or VXLAN, managed
- and controlled through OpenDaylight
+ * Layer 3 tenant networking using VXLAN GPE, managed
+ and controlled through OpenDaylight and the LISP protocol in FD.io/VPP
* Layer 3 connectivity for tenant networks supplied
through FD.io/VPP. Layer 3 features, including security groups as well as
floating IP addresses (i.e. NAT) are implemented by the FD.io/VPP forwarder
- * Manual and automatic (via DHCP) addressing on tenant networks
+ * Manual and automatic (via DHCP relaying) addressing on tenant networks
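+
+The tenant networking and layer 3 features listed above are driven through the
+standard OpenStack API. As an illustration, a tenant network with a router
+towards the external network could be created as follows (the network, subnet
+and router names, the address range and the name of the external network are
+all illustrative)::
+
+  openstack network create tenant-net
+  openstack subnet create --network tenant-net --subnet-range 10.0.0.0/24 tenant-subnet
+  openstack router create tenant-router
+  openstack router add subnet tenant-router tenant-subnet
+  openstack router set --external-gateway external tenant-router
+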
Scenario components and composition
===================================
-TODO: add LISP to components
-
The apex-os-odl-fdio-dvr-noha scenario combines components from three key open
source projects: OpenStack, OpenDaylight, and Fast Data (FD.io). The key
components that realize the apex-os-odl-fdio-dvr-noha scenario and which differ
-from a regular, OVS-based scenario, are the OpenStack ML2 OpenDaylight plugin,
+from a regular OVS-based scenario are the OpenStack ML2 OpenDaylight plugin,
OpenDaylight Neutron Northbound, OpenDaylight Group Based Policy, OpenDaylight
-Virtual Bridge Domain Manager, FD.io Honeycomb management agent and FD.io
-Vector Packet Processor (VPP).
-
-Note that the key components of the OpenDaylight based scenarios of
-FastDataStacks are the same. The centrallized scenario "apex-os-odl-fdio-noha"
-and the DVR scenario "apex-os-odl-fdio-dvr-noha" share the same components.
+Locator/Identifier Separation Protocol Flow Mapping Service,
+FD.io Honeycomb management agent and FD.io Vector Packet Processor (VPP).
Here's a more detailed list of the individual software components involved:
@@ -167,20 +167,19 @@ Policy States, for any errors in the application of a rendered configuration.
**GBP VPP Renderer Interface Manager**: Listens to VPP endpoints in the
Config DataStore and configures associated interfaces on VPP via HoneyComb.
-**GBP VPP Renderer Renderer Policy Manager**: Manages the creation of
-bridge domains using VBD and assigns interfaces to bridge domains.
+**LISP Flow Mapping Service**: TODO description
-**Virtual Bridge Domain Manager (VBD)**: Creates bridge domains (i.e. in case
-of VXLAN creates full mesh of VXLAN tunnels, configures split horizon on
-tunnel endpoints etc.). VDB configures VXLAN tunnels always into a full-mesh
-with split-horizon group forwarding applied on any domain facing tunnel
-interface (i.e. forwarding behavior will be that used for VPLS).
+**LISP Plugin**: TODO description
-**Virtual Packet Processor (VPP) and Honeycomb server**: The VPP is the
+**Vector Packet Processor (VPP)**: VPP is the
accelerated data plane forwarding engine relying on vhost user interfaces
-towards Virtual Machines created by the Nova Agent. The Honeycomb NETCONF
-configuration server is responsible for driving the configuration of the VPP,
-and collecting the operational data.
+towards Virtual Machines created by the Nova Agent.
+
+**VPP LISP**: TODO description
+
+**Honeycomb NETCONF server**:
+The Honeycomb NETCONF configuration server is responsible for driving
+the configuration of the VPP, and collecting operational data.
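+
+As a purely illustrative example of this interface, the configuration held by
+Honeycomb can be read back over its RESTCONF northbound; the port, path and
+credentials below are Honeycomb defaults (an assumption) and may differ in an
+Apex deployment::
+
+  curl -u admin:admin http://<compute-node>:8183/restconf/config/ietf-interfaces:interfaces
+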
**Nova Agent**: The Nova Agent, a sub-component of the overall Openstack
architecture, is responsible for interacting with the compute node's host
@@ -195,13 +194,13 @@ TODO: update the image:
.. image:: FDS-basic-components.jpg
-To provide a better understanding how the above mentioned components interact
-with each other, the following diagram shows how the example of creating a
-vhost-user port on VPP through Openstack Neutron:
+Neutron Port Callflow
+=====================
-To create or update a port, Neutron will send a request to ODL Neutron
-Northbound which contains the UUID, along with the host-id as "vpp" and
-vif-type as "vhost-user". The GBP Neutron mapper turns the "Neutron speak" of
+When a port is created or updated, Neutron sends data to ODL Neutron Northbound
+which contains the port's UUID, along with a host-id such as
+"overcloud-novacompute-0.opnfv.org" and vif-type as "vhost-user".
+The GBP Neutron mapper turns the "Neutron speak" of
"ports" into the generic connectivity model that GroupBasedPolicy uses.
Neutron "ports" become generic "GBP Endpoints" which can be consumed by the
GBP Renderer Manager. The GBP Renderer Manager resolves the policy for the
@@ -210,34 +209,49 @@ specific endpoint, and hands the resolution to a device specific renderer,
which is the VPP renderer in the given case here. VPP renderer turns the
generic policy into VPP specific configuration. Note that in case the policy
would need to be applied to a different device, e.g. an OpenVSwitch (OVS),
-then an "OVS Renderer" would be used. VPP Renderer and the topology manager
-("Virtual Bridge Domain" manager - i.e. VBD) cooperate to create the actual
+then an "OVS Renderer" would be used. VPP Renderer and LISP Flow Mapping
+Service (TODO expand) cooperate to create the actual
network configuration. VPP Renderer configures the interfaces to the virtual
-machines (VM), i.e. the vhost-user interface in the given case here and
-attaches them to a bridge domain on VPP. VBD handles the setup of connectivity
-between bridge domains on individual VPPs, i.e. it maintains the VXLAN tunnels
-in the given case here. Both VPP Renderer as well as VBD communicate with the
-device through Netconf/YANG. All compute and control nodes run an instance of
+machines (VM), i.e. the vhost-user interfaces in the given case here, and the
+LISP-configured VXLAN tunnels (TODO expand).
+VPP Renderer communicates with the device using Netconf/YANG.
+All compute and controller nodes run an instance of
VPP and the VPP-configuration agent "Honeycomb". Honeycomb serves as a
Netconf/YANG server, receives the configuration commands from the VPP
Renderer and drives VPP configuration using VPP's local Java APIs.
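+
+A minimal sketch of the data involved at the start of this callflow, using the
+standard Neutron port binding attributes (the values are illustrative)::
+
+  port:
+    id: <port-uuid>
+    binding:host_id: overcloud-novacompute-0.opnfv.org  # node hosting the VM
+    binding:vif_type: vhost-user                        # VM is wired to VPP via vhost-user
+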
+To provide a better understanding of how the above mentioned components
+interact with each other, the following diagram shows the example of creating
+a vhost-user port on VPP through Openstack Neutron:
+
.. image:: FDS-simple-callflow.png
+DHCP Packet Flow
+================
+
+East-West Packet Flow
+=====================
+
+North-South Packet Flow
+=======================
+
TODO: add description (and possibly a picture) of how forwarding works -
describe how packets travel in the setup
NOTE: could be in some different place in the document
-Scenario Configuration
-======================
+Scenario Configuration and Deployment
+=====================================
+
+The Apex documentation contains information on how to properly set up your
+environment and how to modify the configuration files.
-To enable the "apex-os-odl-fdio-dvr-noha" scenario check the appropriate
-settings in the APEX configuration files. Those are typically found in
-/etc/opnfv-apex.
+To deploy the "apex-os-odl-fdio-dvr-noha" scenario, select
+os-odl-fdio-dvr-noha.yaml as your deploy settings file and use the
+network_settings_vpp.yaml file as a template to create a network configuration
+file. Both of these are in /etc/opnfv-apex.
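+
+With both files in place, the deployment is started with the Apex opnfv-deploy
+tool. A sketch of the invocation, assuming the -d/-n options described in the
+Apex documentation and illustrative file paths::
+
+  sudo opnfv-deploy -d /etc/opnfv-apex/os-odl-fdio-dvr-noha.yaml \
+    -n /path/to/your-network-settings.yaml
+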
-File "deploy_settings.yaml": Choose Opendaylight as controller with version
-"oxygen" and enable vpp as forwarder. "odl_routing_node" chooses the dvr
-setup for l3 forwarding::
+The file os-odl-fdio-dvr-noha.yaml mentioned above contains this
+configuration::
deploy_options:
sdn_controller: opendaylight
@@ -273,18 +287,34 @@ setup for l3 forwarding::
corelist-workers: 2
uio-driver: uio_pci_generic
+The earliest usable ODL version is Oxygen. "odl_routing_node" with the value
+"dvr" chooses the DVR setup for layer 3 forwarding, while "vpp: true" and
+"dataplane: fdio" together enable VPP instead of OVS. The performance options
+are VPP specific. The default hugepages configuration leaves only 3.5GB for
+VMs (2M * 2048 - 512MB reserved for VPP), so if you wish to have more memory
+for VMs, either increase the number of hugepages (hugepages) or the size of
+each hugepage (hugepagesz) for computes.
+
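+As an illustration only, the hugepage settings typically live under the
+performance options for the Compute role in the deploy settings file; the
+exact layout below is an assumption and should be checked against the shipped
+os-odl-fdio-dvr-noha.yaml::
+
+  performance:
+    Compute:
+      kernel:
+        hugepagesz: 2M    # size of each hugepage
+        hugepages: 4096   # 2M * 4096 = 8GB in total, ~7.5GB left for VMs
+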
+In order to create a VM in Openstack you need to use a flavor which uses
+hugepages. One way to configure such a flavor is this::
+
+ openstack flavor create nfv --property hw:mem_page_size=large
+
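+Booting a VM with this flavor then works as usual; the image and network names
+below are placeholders::
+
+  openstack server create --flavor nfv --image <image> --network <tenant-network> vm1
+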
Limitations, Issues and Workarounds
===================================
-For specific information on limitations and issues, please refer to the APEX
+Source NAT is not supported, meaning a VM without a floating IP will not be
+able to reach networks outside of the OpenStack cloud (e.g. the Internet).
+Only one-to-one NAT is supported (i.e. floating IPs).
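+
+A VM can therefore be given external connectivity by associating a floating IP
+with it; the external network and server names below are illustrative::
+
+  openstack floating ip create external
+  openstack server add floating ip vm1 <allocated-floating-ip>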
+
+For other information on limitations and issues, please refer to the APEX
installer release notes.
References
==========
-
* FastDataStacks OPNFV project wiki: https://wiki.opnfv.org/display/fds
+ * Apex OPNFV project wiki: https://wiki.opnfv.org/display/apex
* Fast Data (FD.io): https://fd.io/
* FD.io Vector Packet Processor (VPP): https://wiki.fd.io/view/VPP
* OpenDaylight Controller: https://www.opendaylight.org/
- * OPNFV Danube release - more information: http://www.opnfv.org/danube
+ * OPNFV Euphrates release - more information: http://www.opnfv.org/euphrates