.. OPNFV - Open Platform for Network Function Virtualization
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

Scenario: "OpenStack - OpenDaylight (Layer 2) - FD.io"
======================================================

Scenario: apex-os-odl_l2-fdio-ha

"apex-os-odl_l2-fdio-ha" is a scenario developed as part of the
FastDataStacks OPNFV project. The main components of the
"apex-os-odl_l2-fdio-ha" scenario are:

- APEX (TripleO) installer (please also see the APEX installer documentation)
- OpenStack (in HA configuration)
- OpenDaylight controller in clustered mode, controlling layer 2 networking
- FD.io/VPP virtual forwarder for tenant networking

Introduction
============

NFV and virtualized high-performance applications, such as video processing,
require a "fast data stack" solution that provides carrier-grade forwarding
performance, scalability, and open extensibility, along with functionality
for realizing application policies and controlling a complex network
topology.

A solution stack is only as good as its foundation. Key foundational assets
for an NFV infrastructure are:

* The virtual forwarder: The virtual forwarder needs to be a feature-rich,
  high-performance, highly scalable virtual switch-router. It needs to
  leverage hardware accelerators when available and run in user space.
  In addition, it should be modular and easily extensible.
* Forwarder diversity: A solution stack should support a variety of
  forwarders, hardware forwarders (physical switches and routers) as well
  as software forwarders. This way virtual and physical forwarding domains
  can be seamlessly glued together.
* Policy-driven connectivity: Connectivity should respect and reflect
  different business policies.

In order to meet the desired qualities of an NFV infrastructure, the
following components were chosen for the "OpenStack - OpenDaylight - FD.io"
scenario:

* FD.io Vector Packet Processor (VPP) - a highly scalable, high-performance,
  extensible virtual forwarder
* OpenDaylight Controller - an extensible controller platform which offers
  the ability to separate business logic from networking constructs,
  supports a diverse set of network devices (virtual and physical) via the
  "group based policy (GBP)" component, and can be clustered to achieve a
  highly available deployment - as done in this scenario.
The "Openstack - OpenDaylight - FD.io" scenario provides the capability to
realize a set of use-cases relevant to the deployment of NFV nodes instantiated
by means of an Openstack orchestration system on FD.io/VPP enabled compute
nodes. The role of the Opendaylight network controller in this integration is
twofold. It provides a network device configuration and topology abstraction
via the Openstack Neutron interface, while providing the capability to realize
more complex network policies by means of Group Based Policies. Furthermore it
also provides the capabilities to monitor as well as visualize the operation of
the virtual network devices and their topologies.

In supporting the general use case of instantiating an NFV instance, two
specific types of network transport use cases are realized (a minimal
creation sketch for both variants is shown after the list):

* NFV instances with VPP data-plane forwarding using a VLAN provider network
* NFV instances with VPP data-plane forwarding using a VXLAN overlay
  transport network
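
Both variants are driven through the regular Neutron API. As a rough,
hedged sketch (the cloud entry name, physical network label, and VLAN ID
are assumptions that must match the deployment's network settings, and the
provider attributes require admin credentials), the two network types could
be created with the OpenStack SDK as follows::

  import openstack

  conn = openstack.connect(cloud="overcloud")  # cloud entry name is an assumption

  # VXLAN overlay case: a plain tenant network picks up the tenant network
  # type configured for the deployment (VXLAN in this variant).
  vxlan_net = conn.network.create_network(name="tenant-net")

  # VLAN provider case: the provider attributes below are illustrative only.
  vlan_net = conn.network.create_network(
      name="provider-net",
      provider_network_type="vlan",
      provider_physical_network="datacentre",
      provider_segmentation_id=100,
  )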

A deployment of the "apex-os-odl_l2-fdio-ha" scenario consists of 6 or more
servers:

* 1 Jumphost hosting the APEX installer - running the Undercloud
* 3 Controlhosts, which run the Overcloud as well as OpenDaylight as a
  network controller. OpenDaylight is deployed in clustered mode and runs
  on all 3 control nodes.
* 2 or more Computehosts

.. image:: FDS-odl_l2-ha-overview.png

Tenant networking leverages FD.io/VPP. Open vSwitch (OVS) is used for all
other connectivity; in particular, the connectivity to public networking /
the Internet (i.e. br-ext) is handled by OVS as in any standard OpenStack
deployment. The OpenDaylight network controller is used to set up and manage
layer 2 networking for the scenario. Tenant networking can leverage either
VXLAN (in which case a full mesh of VXLAN tunnels is created) or VLANs.
Layer 3 connectivity for a tenant network is provided centrally via qrouter
on the control node. As in a standard OpenStack deployment, the Layer 3
agent configures the qrouter and associated rulesets for security (security
groups) and NAT (floating IPs). Public IP network connectivity for a tenant
network is provided by interconnecting the VPP-based bridge domain
representing the tenant network to qrouter using a tap interface. The setup
is depicted below:

.. image:: FDS-L3-tenant-connectivity.png
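
The bridge domains and (in the VXLAN case) tunnels that the controller
programs into VPP can be inspected directly on a compute or control node via
the VPP CLI. Below is a minimal sketch, assuming local shell access to the
node and sufficient privileges to run vppctl::

  import subprocess

  def vppctl(*args):
      """Run a VPP CLI command via vppctl and return its output."""
      return subprocess.run(["vppctl", *args], capture_output=True,
                            text=True, check=True).stdout

  # Bridge domains created for tenant networks, the VXLAN tunnels that
  # interconnect them across nodes (VXLAN case only), and all interfaces.
  print(vppctl("show", "bridge-domain"))
  print(vppctl("show", "vxlan", "tunnel"))
  print(vppctl("show", "interface"))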

With high availability factored in, the setup looks like the following:

.. image:: os-odl_l2-fdio-ha-colorado2_1.png

Note that the picture shows only two controller nodes for reasons of
simplicity. An HA deployment always includes 3 controller nodes.

Features of the scenario
------------------------

Main features of the "apex-os-odl_l2-fdio-ha" scenario:

* Automated installation using the APEX installer
* Fast and scalable tenant networking using FD.io/VPP as forwarder
* Layer 2 networking using VLANs or VXLAN, managed and controlled through
  OpenDaylight
* Layer 3 connectivity for tenant networks supplied centrally on the
  control node through standard OpenStack mechanisms. All layer 3 features
  apply, including floating IPs (i.e. NAT) and security groups.
* Manual and automatic (via DHCP) addressing on tenant networks
* OpenDaylight controller high availability (clustering) - see the
  health-check sketch after this list
* OpenStack high availability
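
A simple way to sanity-check that the clustered OpenDaylight datastore is
healthy on each controller is to query its Jolokia shard-manager MBean. The
sketch below is a hedged example only: the port, credentials, and exact
MBean name are assumptions and can differ between OpenDaylight releases and
deployments::

  import requests

  # Placeholder addresses for the three control nodes.
  controllers = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]
  mbean = ("org.opendaylight.controller:type=DistributedOperationalDatastore,"
           "Category=ShardManager,name=shard-manager-operational")

  for host in controllers:
      url = f"http://{host}:8181/jolokia/read/{mbean}"
      resp = requests.get(url, auth=("admin", "admin"), timeout=10)
      status = resp.json().get("value", {}).get("SyncStatus")
      print(f"{host}: SyncStatus={status}")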

Scenario components and composition
===================================

The apex-os-odl_l2-fdio-ha scenario combines components from three key open
source projects: OpenStack, OpenDaylight, and Fast Data (FD.io). The key
components that realize the apex-os-odl_l2-fdio-ha scenario, and which
differ from a regular, OVS-based scenario, are the OpenStack ML2
OpenDaylight plugin, OpenDaylight Neutron Northbound, OpenDaylight Group
Based Policy, OpenDaylight Virtual Bridge Domain Manager, the FD.io
Honeycomb management agent, and the FD.io Vector Packet Processor (VPP).

Here's a more detailed list of the individual software components involved:

**OpenStack Neutron ML2 OpenDaylight Plugin**: Handles Neutron database
synchronization and interaction with the southbound controller using a REST
interface.
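
As a rough illustration of that REST interface, the networks the plugin has
synchronized into the controller can be read back from OpenDaylight's
Neutron northbound API. The URL path, port, and default credentials below
are assumptions that depend on the OpenDaylight configuration::

  import requests

  odl = "192.0.2.11"  # placeholder address of one controller node
  url = f"http://{odl}:8181/controller/nb/v2/neutron/networks"
  resp = requests.get(url, auth=("admin", "admin"), timeout=10)

  for net in resp.json().get("networks", []):
      print(net.get("id"), net.get("name"))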

**ODL GBP Neutron Mapper**: Maps Neutron elements like networks, subnets,
security groups, etc. to GBP entities: Creates policy and configuration for
tenants (endpoints, resolved policies, forwarding rules).

**ODL GBP Neutron VPP Mapper**: Maps Neutron ports to VPP endpoints in GBP.

**ODL GBP Location Manager**: Provides the real location for endpoints
(i.e. which physical node an endpoint is connected to).

**GBP Renderer Manager**: Creates configuration for renderers (e.g. the
VPP-Renderer or OVS-Renderer). The GBP Renderer Manager is the central point
for dispatching data to specific device renderers. It uses the information
derived from a GBP endpoint and its topology entries to dispatch the
configuration task to a specific device renderer by writing a renderer
policy configuration into the registered renderer's policy store. The
renderer manager also monitors, by being a data change listener on the VPP
Renderer Policy States, for any errors in the application of a rendered
configuration.

**GBP VPP Renderer Interface Manager**: Listens to VPP endpoints in the
Config DataStore and configures the associated interfaces on VPP via
Honeycomb.

**GBP VPP Renderer Renderer Policy Manager**: Manages the creation of bridge
domains using VBD and assigns interfaces to bridge domains.

**Virtual Bridge Domain Manager (VBD)**: Creates bridge domains (i.e. in the
case of VXLAN, creates a full mesh of VXLAN tunnels and configures split
horizon on tunnel endpoints, etc.). VBD always configures VXLAN tunnels into
a full mesh with split-horizon group forwarding applied on any domain-facing
tunnel interface (i.e. the forwarding behavior is that used for VPLS).

**Vector Packet Processor (VPP) and Honeycomb server**: VPP is the
accelerated data-plane forwarding engine, relying on vhost-user interfaces
towards the Virtual Machines created by the Nova Agent. The Honeycomb
NETCONF configuration server is responsible for driving the configuration
of VPP and for collecting its operational data.
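
Because Honeycomb exposes VPP over NETCONF/YANG, its configuration can also
be inspected directly with any NETCONF client. A minimal sketch using
ncclient follows; the host address, NETCONF port, and credentials are
assumptions that must be taken from the Honeycomb configuration of the
deployment::

  from ncclient import manager

  # Placeholder connection details for a compute node's Honeycomb agent.
  with manager.connect(host="192.0.2.21", port=2831, username="admin",
                       password="admin", hostkey_verify=False) as m:
      # Retrieve the running configuration that VBD and the VPP renderer
      # have pushed down to this node.
      print(m.get_config(source="running"))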

**Nova Agent**: The Nova Agent, a sub-component of the overall OpenStack
architecture, is responsible for interacting with the compute node's host
libvirt API to drive the life cycle of Virtual Machines. It, along with the
compute node software, is assumed to be capable of supporting vhost-user
interfaces.
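
Whether a running VM was actually wired up with a vhost-user interface can
be verified on the compute node through libvirt. The sketch below shells out
to virsh; the domain name is a hypothetical placeholder (list the real ones
with "virsh list" first)::

  import subprocess

  domain = "instance-00000001"  # hypothetical libvirt domain name
  xml = subprocess.run(["virsh", "dumpxml", domain], capture_output=True,
                       text=True, check=True).stdout

  # vhost-user ports appear as <interface type='vhostuser'> entries in the
  # libvirt domain XML.
  for line in xml.splitlines():
      if "vhostuser" in line:
          print(line.strip())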

The picture below shows the key components.

.. image:: FDS-basic-components.jpg

To provide a better understanding of how the above-mentioned components
interact with each other, the following paragraphs and the diagram below
walk through the example of creating a vhost-user port on VPP through
OpenStack Neutron.

To create or update a port, Neutron sends a request to ODL Neutron
Northbound which contains the port UUID, along with the host-id as "vpp" and
the vif-type as "vhost-user". The GBP Neutron Mapper turns the "Neutron
speak" of "ports" into the generic connectivity model that Group Based
Policy uses: Neutron "ports" become generic "GBP endpoints" which can be
consumed by the GBP Renderer Manager. The GBP Renderer Manager resolves the
policy for the endpoint, i.e. it determines which communication
relationships apply to the specific endpoint, and hands the resolution to a
device-specific renderer, which is the VPP renderer in this case. The VPP
renderer turns the generic policy into VPP-specific configuration. Note that
if the policy had to be applied to a different device, e.g. an Open vSwitch
(OVS), an "OVS renderer" would be used instead. The VPP renderer and the
topology manager ("Virtual Bridge Domain" manager, i.e. VBD) cooperate to
create the actual network configuration. The VPP renderer configures the
interfaces to the virtual machines (VMs), i.e. the vhost-user interfaces in
this case, and attaches them to a bridge domain on VPP. VBD handles the
setup of connectivity between bridge domains on individual VPPs, i.e. it
maintains the VXLAN tunnels in this case. Both the VPP renderer and VBD
communicate with the device through NETCONF/YANG. All compute and control
nodes run an instance of VPP and the VPP configuration agent "Honeycomb".
Honeycomb serves as a NETCONF/YANG server, receives the configuration
commands from VBD and the VPP renderer, and drives the VPP configuration
through VPP's local Java APIs.

.. image:: FDS-simple-callflow.png
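
From a user's point of view, this call flow is triggered by nothing more
than a regular Neutron port operation. A hedged sketch using the OpenStack
SDK is shown below; the cloud entry name, network names, and CIDR are
illustrative assumptions, and the reported VIF type only reflects vhost-user
once the port has actually been bound to a VM on a VPP compute node::

  import openstack

  conn = openstack.connect(cloud="overcloud")  # cloud entry name is an assumption

  net = conn.network.create_network(name="demo-net")
  conn.network.create_subnet(network_id=net.id, ip_version=4,
                             cidr="10.0.10.0/24", name="demo-subnet")
  port = conn.network.create_port(network_id=net.id, name="demo-port")

  # Binding details are filled in by Neutron/Nova once the port is bound.
  print(port.id, port.binding_vif_type, port.binding_host_id)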

Scenario Configuration
======================

To enable the "apex-os-odl_l2-fdio-ha" scenario, check the appropriate
settings in the APEX configuration files. Those are typically found in
/etc/opnfv-apex.

In the file "deploy_settings.yaml", choose opendaylight as the SDN
controller with version "carbon" and enable vpp as the forwarder. Also make
sure that "ha_enabled" is set to "true" in the global_params section; from a
configuration file perspective, "ha_enabled" is the only real difference
between this high-availability scenario and the ODL-L2 scenario without
high-availability support. "hugepages" needs to be set to a sufficiently
large value for VPP to work. The default value for VPP is 1024, but this
only allows a few VMs to be started. If feasible, choose a significantly
larger number on the compute nodes::

  global_params:
    ha_enabled: true

  deploy_options:
    sdn_controller: opendaylight
    sdn_l3: false
    odl_version: carbon
    tacker: true
    congress: true
    sfc: false
    vpn: false
    vpp: true
    dataplane: fdio
    performance:
      Controller:
        kernel:
          hugepages: 1024
          hugepagesz: 2M
          intel_iommu: 'on'
          iommu: pt
          isolcpus: 1,2
        vpp:
          main-core: 1
          corelist-workers: 2
          uio-driver: uio_pci_generic
      Compute:
        kernel:
          hugepagesz: 2M
          hugepages: 2048
          intel_iommu: 'on'
          iommu: pt
          isolcpus: 1,2
        vpp:
          main-core: 1
          corelist-workers: 2
          uio-driver: uio_pci_generic
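
Since VPP relies on hugepages (as noted above), it can be useful to verify
after deployment that enough hugepages are actually available on a compute
node. A quick sketch, run locally on the node; the field names come from
/proc/meminfo on Linux::

  # Run locally on the compute node after deployment.
  wanted = ("HugePages_Total", "HugePages_Free", "Hugepagesize")
  with open("/proc/meminfo") as f:
      for line in f:
          if line.startswith(wanted):
              print(line.strip())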

Validated deployment environments
=================================

The "os-odl_l2-fdio-ha" scenario has been deployed and tested on the
following sets of hardware:

* Linux Foundation lab (Chassis: Cisco UCS-B-5108 blade server,
  NICs: 8 external / 32 internal 10GE ports,
  RAM: 32G (4 x 8GB DDR4-2133-MHz RDIMM/PC4-17000/single rank/x4/1.2v),
  CPU: 3.50 GHz E5-2637 v3/135W 4C/15MB Cache/DDR4 2133MHz,
  Disk: 1.2 TB 6G SAS 10K rpm SFF HDD); see also:
  https://wiki.opnfv.org/display/pharos/Lflab+Hosting
* OPNFV CENGN lab (https://wiki.opnfv.org/display/pharos/CENGN+Pharos+Lab)
* Cisco internal development labs (UCS-B and UCS-C)

Limitations, Issues and Workarounds
===================================

For specific information on limitations and issues, please refer to the
FastDataStacks (FDS) release notes. Note that this high-availability
scenario deploys OpenStack in HA mode *and* OpenDaylight in cluster mode.

References
==========

* FastDataStacks OPNFV project wiki: https://wiki.opnfv.org/display/fds
* Fast Data (FD.io): https://fd.io/
* FD.io Vector Packet Processor (VPP): https://wiki.fd.io/view/VPP
* OpenDaylight Controller: https://www.opendaylight.org/
* OPNFV Danube release - more information: http://www.opnfv.org/danube