From 1acc55510e09dd6d877d5930461c744aeeef9753 Mon Sep 17 00:00:00 2001
From: Georg Kunz
Date: Mon, 18 Jul 2016 15:56:35 +0200
Subject: Multi-back-ends: Added gap analysis

Integrating Bin's text into the overall document by adding a gap analysis
and a conclusion.

Change-Id: I958f3206cc6176520ef635271208e35c7f5c6306
Signed-off-by: Georg Kunz
---
 docs/requirements/use_cases/multiple_backends.rst | 111 ++++++++++++++++++++--
 1 file changed, 103 insertions(+), 8 deletions(-)

diff --git a/docs/requirements/use_cases/multiple_backends.rst b/docs/requirements/use_cases/multiple_backends.rst
index 0d4ab13..62ed42a 100644
--- a/docs/requirements/use_cases/multiple_backends.rst
+++ b/docs/requirements/use_cases/multiple_backends.rst
@@ -2,36 +2,131 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Bin Hu
+
+Multiple Networking Backends
+----------------------------
+
+Description
+^^^^^^^^^^^
+
 Network Function Virtualization (NFV) brings the need of supporting multiple networking
-back-ends in virtualized infrastructure environment.
+back-ends in virtualized infrastructure environments.

 First of all, a Service Providers' virtualized network infrastructure will consist of
-multiple SDN Controllers from different vendors for obvious business reason.
+multiple SDN Controllers from different vendors for obvious business reasons.
 Those SDN Controllers may be managed within one cloud or multiple clouds. Jointly, those
 VIMs (e.g. OpenStack instances) and SDN Controllers need to work together in an
 interoperable framework to create NFV services in the Service Providers' virtualized
 network infrastructure. It is needed that one VIM (e.g. OpenStack
-instance) shall be able to support multiple SDN Controllers at back-end.
+instance) shall be able to support multiple SDN Controllers as back-ends.

 Secondly, a Service Providers' virtualized network infrastructure will serve multiple,
 heterogeneous administrative domains, such as mobility domain, access networks, edge
 domain, core networks, WAN, enterprise domain, etc. The architecture of virtualized
 network infrastructure needs different types of SDN Controllers that are specialized
 and targeted for specific features and requirements of those different domains.
-The architectural design may also include global and local SDN Controllers. And multiple
-local SDN Controllers may be managed by one VIM (e.g. OpenStack instance).
+The architectural design may also include global and local SDN Controllers.
+Importantly, multiple local SDN Controllers may be managed by one VIM (e.g.
+OpenStack instance).

 Furthermore, even within one administrative domain, NFV services could also be quite
 diversified.
-Specialized NFV service needs specialized and dedicated SDN Controller too. Thus a Service
+Specialized NFV services require specialized and dedicated SDN Controllers. Thus a Service
 Provider needs to use multiple APIs and back-ends simultaneously in order to provide users
 with diversified services at the same time. At the same time, for a particular NFV service,
 the new networking APIs need to be agnostic of the back-ends.
-Therefore, it is expected that in NFV networking service domain:
+
+
+Requirements
+^^^^^^^^^^^^
+
+Based on the use cases described above, we derive the following
+requirements.
+
+It is expected that in the NFV networking service domain:

 * One OpenStack instance shall support multiple APIs and SDN Controllers simultaneously
+* New NFV Networking APIs shall be agnostic of back-ends
 * Interoperability is needed among multi-vendor SDN Controllers at back-end
-* New NFV Networking APIs shall be agnostic of back-ends
+
+
+Current Implementation
+^^^^^^^^^^^^^^^^^^^^^^
+
+In the current implementation of OpenStack networking, SDN controllers are
+hooked up to Neutron by means of dedicated plugins. A plugin translates
+requests coming in through the Neutron northbound API, e.g. the creation of a
+new network, into the appropriate northbound API calls of the corresponding SDN
+controller.
+
+There are multiple plugin mechanisms currently available in Neutron, each
+targeting a different purpose. In general, there are `core plugins`, covering
+basic networking functionality, and `service plugins`, providing layer 3
+connectivity and advanced networking services such as FWaaS or LBaaS.
+
+
+Core and ML2 Plugins
+''''''''''''''''''''
+
+The Neutron core plugins cover basic Neutron functionality, such as creating
+networks and ports. Every core plugin implements the functionality needed to
+cover the full range of the Neutron core API. A special instance of a core
+plugin is the ML2 core plugin, which in turn allows for using sub-drivers,
+separated into type drivers (VLAN, VxLAN, GRE) and mechanism drivers (OVS,
+OpenDaylight, etc.). This allows using dedicated sub-drivers for dedicated
+functionality.
+
+In practice, different SDN controllers use both plugin mechanisms to integrate
+with Neutron. For instance, OpenDaylight uses an ML2 mechanism driver, whereas
+OpenContrail is integrated by means of a full core plugin.
+
+In its current implementation, only one Neutron core plugin can be active at
+any given time. This means that if an SDN controller utilizes a dedicated core
+plugin, no other SDN controller can be used at the same time for the same type
+of service.
+
+In contrast, the ML2 plugin allows for using multiple mechanism drivers
+simultaneously. In principle, this enables a parallel deployment of multiple
+SDN controllers if and only if all SDN controllers integrate through an ML2
+mechanism driver.
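+
+For illustration, a minimal ML2 configuration enabling two mechanism drivers
+in parallel might look as follows. The driver aliases are examples and depend
+on the plugin packages installed in a given deployment; whether a given
+combination actually inter-works depends on the individual drivers::
+
+  # ml2_conf.ini (sketch): two mechanism drivers loaded side by side;
+  # driver aliases are examples and depend on the installed packages
+  [ml2]
+  type_drivers = flat,vlan,vxlan
+  tenant_network_types = vxlan
+  mechanism_drivers = openvswitch,opendaylight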
+
+
+Neutron Service Plugins
+'''''''''''''''''''''''
+
+Neutron service plugins target L3 services and advanced networking services,
+such as BGPVPN or LBaaS. Typically, a service plugin itself provides a driver
+mechanism, and a corresponding driver needs to be implemented for every SDN
+controller. As the architecture of the driver mechanism is up to the community
+developing the service plugin, it needs to be analyzed individually for every
+service plugin if and how multiple back-ends are supported. A configuration
+sketch illustrating how service plugins are enabled is given at the end of
+this use case.
+
+
+Gaps in the current solution
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Given the use case description and the current implementation of OpenStack
+Neutron, we identify the following gaps:
+
+
+[MB-GAP1] Limited support for multiple back-ends
+''''''''''''''''''''''''''''''''''''''''''''''''
+
+As pointed out above, the Neutron core plugin mechanism only allows for one
+active plugin at a time. The ML2 plugin allows for running multiple mechanism
+drivers in parallel; however, successful inter-working strongly depends on the
+individual drivers.
+
+
+Conclusion
+^^^^^^^^^^
+
+We conclude that a clean method of integrating multiple SDN controllers into a
+single OpenStack deployment is required to fulfill the needs of operators.
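+
+For reference, service plugins are enabled independently of the ML2 mechanism
+drivers, typically via the ``service_plugins`` option in ``neutron.conf``. The
+plugin aliases in the following sketch are examples and depend on the
+installed packages::
+
+  # neutron.conf (sketch): service plugins are configured separately from
+  # the ML2 mechanism drivers; plugin aliases are examples
+  [DEFAULT]
+  service_plugins = router,bgpvpn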