author    juraj.linkes <jlinkes@cisco.com>  2017-10-18 10:40:27 +0200
committer juraj.linkes <jlinkes@cisco.com>  2017-10-18 18:34:27 +0200
commit    3e93206b58c52b1a6aea05ec4faae80c9715534d
tree      97e6bba7da6832bf2e451172a32db6f4799e98df /docs/scenarios
parent    13a7b3ceb0d1a2811cd07a7e1bf4763d2b00432d
Final documentation updates for Euphrates
Updates release notes as well as nosdn scenarios description.

Change-Id: Ie061f8b371a94df0baca2655c960a5fda133266b
Signed-off-by: juraj.linkes <jlinkes@cisco.com>
Diffstat (limited to 'docs/scenarios')
 docs/scenarios/os-nosdn-fdio-ha/scenario.description.rst (-rw-r--r-- -> -rwxr-xr-x) | 62
 docs/scenarios/os-nosdn-fdio-noha/index.rst                                         |  2
 docs/scenarios/os-nosdn-fdio-noha/scenario.description.rst                          | 66
 3 files changed, 50 insertions(+), 80 deletions(-)
diff --git a/docs/scenarios/os-nosdn-fdio-ha/scenario.description.rst b/docs/scenarios/os-nosdn-fdio-ha/scenario.description.rst
index 86eadb2..ee4196e 100644..100755
--- a/docs/scenarios/os-nosdn-fdio-ha/scenario.description.rst
+++ b/docs/scenarios/os-nosdn-fdio-ha/scenario.description.rst
@@ -15,7 +15,8 @@ are:
- APEX (TripleO) installer (please also see APEX installer documentation)
- Openstack (in HA configuration)
- FD.io/VPP virtual forwarder for tenant networking
- - etcd, which is the VPP ML2 mechanism driver's distributed key-value store, in clustered mode
+ - networking-vpp (Neutron ML2 mechanism driver for FD.io/VPP)
+ - etcd (networking-vpp's distributed key-value store) in clustered mode
Introduction
============
@@ -25,7 +26,7 @@ require a "fast data stack" solution that provides both carrier grade
forwarding performance, scalability and open extensibility.
A key component of any NFV solution is the virtual forwarder, which needs to be
-a feature rich, high performance, highly scale virtual switch-router. It needs
+a feature-rich, high-performance, highly scalable virtual switch-router. It needs
to leverage hardware accelerators when available and run in user space. In
addition, it should be modular and easily extensible. The Vector Packet
Processor (VPP) supplied by the FD.io project meets these needs, in that
@@ -43,24 +44,26 @@ servers:
* 3 Controlhosts, which run the Overcloud and Openstack services as well as the VPP ML2 etcd cluster
* 2 or more Computehosts
-.. image:: FDS-nosdn-overview.png
Tenant networking leverages FD.io/VPP. Open vSwitch (OVS) is used for all other
connectivity, in particular the connectivity to public networking / the
Internet (i.e. br-ext) is performed via OVS as in any standard OpenStack
-deployment. A VPP management agent is used to setup and manage layer 2
-networking for the scenario. Neutron ML2 plugin is configured to use
-the VPP ML2 networking mechanism driver. Tenant networking can either leverage
-VLANs or plain interfaces (flat networks). Layer 3 connectivity for a tenant
-network is provided centrally via qrouter on the control node. As in a
-standard OpenStack deployment, the Layer3 agent configures the qrouter and
-associated rulesets for security (security groups) and NAT (floating IPs).
-Public IP network connectivity for a tenant network is provided by
-interconnecting the VPP-based bridge domain representing the tenant network to
-qrouter using a tap interface.
+deployment. The Neutron ML2 plugin is configured to use networking-vpp, the ML2-VPP
+networking mechanism driver. Networking-vpp also provides the VPP management
+agent used to set up and manage layer 2 networking for the scenario. Tenant
+networking can leverage either VLANs or plain interfaces. Layer 3 connectivity
+for a tenant network is provided centrally via qrouter on the control node. As
+in a standard OpenStack deployment, the Layer3 agent configures the qrouter and
+associated rulesets for security (security groups) and NAT (floating IPs). Public
+IP network connectivity for a tenant network is provided by interconnecting the
+VPP-based bridge domain representing the tenant network to qrouter using a tap
+interface.
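
For illustration, tenant networks in this scenario are created through the
standard Neutron API. A minimal sketch using the OpenStack client, assuming a
VLAN-backed network; the network name, subnet range and physical network label
("datacentre") are hypothetical placeholders::

    # Create a VLAN-backed tenant network and a subnet on it
    # (names and the physical network label are placeholders).
    openstack network create --provider-network-type vlan \
        --provider-physical-network datacentre \
        --provider-segment 1000 tenant-net
    openstack subnet create --network tenant-net \
        --subnet-range 10.0.0.0/24 tenant-subnet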
The setup is depicted below:
+
+.. image:: FDS-nosdn-overview.png
+
Features of the scenario
------------------------
@@ -86,7 +89,7 @@ The os-nosdn-fdio-ha scenario combines components from two key open
source projects: OpenStack and Fast Data (FD.io). In order to make Fast Data
(FD.io) networking available to this scenario, an ML2 mechanism driver and a
light-weight control plane agent for the VPP forwarder have been created. For
-details see also https://git.openstack.org/cgit/openstack/networking-vpp/
+details see also https://github.com/openstack/networking-vpp.
Networking-vpp provides a Neutron ML2 mechanism driver to bring the advantages
of VPP to OpenStack deployments. It uses an etcd cluster on the control node to
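
The etcd cluster referred to here can be inspected directly on a deployed
control node. A minimal sketch using the etcd v2 command line; the
/networking-vpp prefix matches the driver's default, but the exact key layout
underneath it is an assumption and may vary between networking-vpp releases::

    # List the keys networking-vpp keeps in etcd (etcd v2 API).
    # The /networking-vpp prefix is the driver default; the layout of
    # per-host port and state keys below it is an assumption.
    etcdctl ls --recursive /networking-vpp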
@@ -110,33 +113,7 @@ Scenario Configuration
To enable the "os-nosdn-fdio-ha" scenario check the appropriate settings
in the APEX configuration files. Those are typically found in /etc/opnfv-apex.
-Use the file "os-nosdn-fdio-ha.yaml"::
-
- global_params:
- ha_enabled: true
-
- deploy_options:
- sdn_controller: false
- sdn_l3: false
- tacker: true
- congress: true
- sfc: false
- vpn: false
- vpp: true
- dataplane: fdio
- performance:
- Controller:
- vpp:
- uio-driver: uio_pci_generic
- Compute:
- kernel:
- hugepagesz: 2M
- hugepages: 2048
- intel_iommu: 'on'
- iommu: pt
- isolcpus: 1,2
- vpp:
- uio-driver: uio_pci_generic
+Use the file "os-nosdn-fdio-ha.yaml".
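
Once the desired settings file is in place, deployment is started with the
APEX deploy tool. A sketch, assuming the standard opnfv-deploy invocation and
the default network settings file shipped in /etc/opnfv-apex (the exact file
names on a given system may differ)::

    # Deploy the HA scenario with APEX (paths are assumptions based
    # on the APEX defaults).
    opnfv-deploy -d /etc/opnfv-apex/os-nosdn-fdio-ha.yaml \
        -n /etc/opnfv-apex/network_settings.yaml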
Validated deployment environments
@@ -166,8 +143,7 @@ References
* FastDataStacks OPNFV project wiki: https://wiki.opnfv.org/display/fds
* Fast Data (FD.io): https://fd.io/
* FD.io Vector Packet Processor (VPP): https://wiki.fd.io/view/VPP
- * ML2 VPP mechanisms driver: https://git.openstack.org/cgit/openstack/networking-vpp/
- * OPNFV Danube release - more information: http://www.opnfv.org/danube
+ * ML2 VPP mechanism driver: https://github.com/openstack/networking-vpp
 * Networking-vpp launchpad (ticket tracker): https://launchpad.net/networking-vpp
* Networking-vpp Wiki: https://wiki.openstack.org/wiki/Networking-vpp/
* APEX (TripleO based) installer: https://wiki.opnfv.org/display/apex/Apex
diff --git a/docs/scenarios/os-nosdn-fdio-noha/index.rst b/docs/scenarios/os-nosdn-fdio-noha/index.rst
index daf71e3..8c21268 100644
--- a/docs/scenarios/os-nosdn-fdio-noha/index.rst
+++ b/docs/scenarios/os-nosdn-fdio-noha/index.rst
@@ -9,7 +9,7 @@
Fast Data Stacks Scenario: os-nosdn-fdio-noha Overview and Description
**********************************************************************
-Scenario: "OpenStack - FD.io" (apex-os-nosdn-fdio-noha)
+Scenario: "OpenStack - FD.io" (os-nosdn-fdio-noha)
is a scenario developed as part of the FastDataStacks
OPNFV project.
diff --git a/docs/scenarios/os-nosdn-fdio-noha/scenario.description.rst b/docs/scenarios/os-nosdn-fdio-noha/scenario.description.rst
index 059c614..aeecbb8 100755
--- a/docs/scenarios/os-nosdn-fdio-noha/scenario.description.rst
+++ b/docs/scenarios/os-nosdn-fdio-noha/scenario.description.rst
@@ -6,15 +6,16 @@
Scenario: "OpenStack - FD.io"
=============================
-Scenario: apex-os-nosdn-fdio-noha
+Scenario: os-nosdn-fdio-noha
"apex-os-nosdn-noha" is a scenario developed as part of the FastDataStacks
-OPNFV project. The main components of the "apex-os-nosdn-fdio-noha" scenario
+OPNFV project. The main components of the "os-nosdn-fdio-noha" scenario
are:
- APEX (TripleO) installer (please also see APEX installer documentation)
- Openstack (in non-HA configuration)
- FD.io/VPP virtual forwarder for tenant networking
+ - networking-vpp (Neutron ML2 mechanism driver for FD.io/VPP)
Introduction
============
@@ -24,7 +25,7 @@ require a "fast data stack" solution that provides both carrier grade
forwarding performance, scalability and open extensibility.
A key component of any NFV solution is the virtual forwarder, which needs to be
-a feature rich, high performance, highly scale virtual switch-router. It needs
+a feature-rich, high-performance, highly scalable virtual switch-router. It needs
to leverage hardware accelerators when available and run in user space. In
addition, it should be modular and easily extensible. The Vector Packet
Processor (VPP) supplied by the FD.io project meets these needs, in that
@@ -35,33 +36,36 @@ The "Openstack - FD.io/VPP" scenario provides the capability to realize a set
of use-cases relevant to the deployment of NFV nodes instantiated by means of
an Openstack orchestration system on FD.io/VPP enabled compute nodes.
-A deployment of the "apex-os-nosdn-fdio-noha" scenario consists of 3 or more
+A deployment of the "os-nosdn-fdio-noha" scenario consists of 3 or more
servers:
* 1 Jumphost hosting the APEX installer - running the Undercloud
* 1 Controlhost, which runs the Overcloud and Openstack services
* 1 or more Computehosts
-.. image:: FDS-nosdn-overview.png
Tenant networking leverages FD.io/VPP. Open vSwitch (OVS) is used for all other
connectivity, in particular the connectivity to public networking / the
Internet (i.e. br-ext) is performed via OVS as in any standard OpenStack
-deployment. A VPP management agent is used to setup and manage layer 2
-networking for the scenario. Neutron ML2 plugin is configured to use
-the ML2-VPP networking mechanism driver. Tenant networking can either leverage
-VLANs or plain interfaces. Layer 3 connectivity for a tenant network is
-provided centrally via qrouter on the control node. As in a standard OpenStack
-deployment, the Layer3 agent configures the qrouter and associated rulesets for
-security (security groups) and NAT (floating IPs). Public IP network
-connectivity for a tenant network is provided by interconnecting the VPP-based
-bridge domain representing the tenant network to qrouter using a tap interface.
+deployment. The Neutron ML2 plugin is configured to use networking-vpp, the ML2-VPP
+networking mechanism driver. Networking-vpp also provides the VPP management
+agent used to set up and manage layer 2 networking for the scenario. Tenant
+networking can leverage either VLANs or plain interfaces. Layer 3 connectivity
+for a tenant network is provided centrally via qrouter on the control node. As
+in a standard OpenStack deployment, the Layer3 agent configures the qrouter and
+associated rulesets for security (security groups) and NAT (floating IPs). Public
+IP network connectivity for a tenant network is provided by interconnecting the
+VPP-based bridge domain representing the tenant network to qrouter using a tap
+interface.
+
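
Once deployed, the VPP side of this layer 2 setup can be inspected on any node
running the forwarder. A short sketch using the standard VPP command line;
interface and bridge-domain numbering will vary per deployment::

    # Show the interfaces VPP owns and the bridge domains that back
    # the Neutron tenant networks described above.
    vppctl show interface
    vppctl show bridge-domain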
The setup is depicted below:
+
+.. image:: FDS-nosdn-overview.png
+
Features of the scenario
------------------------
-Main features of the "apex-os-odl_l2-fdio-noha" scenario:
+Main features of the "os-nosdn-fdio-noha" scenario:
* Automated installation using the APEX installer
* Fast and scalable tenant networking using FD.io/VPP as forwarder
@@ -77,11 +81,11 @@ Main features of the "apex-os-odl_l2-fdio-noha" scenario:
Networking in this scenario using VPP
-------------------------------------
-The apex-os-nosdn-fdio-noha scenario combines components from two key open
+The os-nosdn-fdio-noha scenario combines components from two key open
source projects: OpenStack and Fast Data (FD.io). In order to make Fast Data
(FD.io) networking available to this scenario, an ML2 mechanism driver and a
light-weight control plane agent for the VPP forwarder have been created. For
-details see also https://github.com/naveenjoy/networking-vpp/
+details see also https://github.com/openstack/networking-vpp.
Networking-vpp provides a Neutron ML2 mechanism driver to bring the advantages
of VPP to OpenStack deployments. It uses an etcd cluster on the control node to
@@ -102,28 +106,15 @@ faces when you read it.
Scenario Configuration
======================
-To enable the "apex-os-nosdn-fdio-noha" scenario check the appropriate settings
+To enable the "os-nosdn-fdio-noha" scenario check the appropriate settings
in the APEX configuration files. Those are typically found in /etc/opnfv-apex.
-File "deploy_settings.yaml" choose opendaylight as controller with version
-"boron" and enable vpp as forwarder::
-
- global_params:
- ha_enabled: false
-
- deploy_options:
- sdn_controller: false
- sdn_l3: false
- tacker: false
- congress: false
- sfc: false
- vpn: false
- vpp: true
+Use the file "os-nosdn-fdio-noha.yaml".
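
As with the HA scenario, deployment is then triggered via the APEX deploy
tool; a sketch, with the network settings path assumed from the APEX
defaults::

    # Deploy the non-HA scenario (paths are assumptions).
    opnfv-deploy -d /etc/opnfv-apex/os-nosdn-fdio-noha.yaml \
        -n /etc/opnfv-apex/network_settings.yaml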
Validated deployment environments
=================================
-The "os-odl_l2-fdio-noha" scenario has been deployed and tested
+The "os-nosdn-fdio-noha" scenario has been deployed and tested
on the following sets of hardware:
* Linux Foundation lab (Chassis: Cisco UCS-B-5108 blade server,
NICs: 8 external / 32 internal 10GE ports,
@@ -137,7 +128,8 @@ on the following sets of hardware:
Limitations, Issues and Workarounds
===================================
-There are no known issues.
+For specific information on limitations and issues, please refer to the APEX
+installer release notes.
References
==========
@@ -146,5 +138,7 @@ References
* FastDataStacks OPNFV project wiki: https://wiki.opnfv.org/display/fds
* Fast Data (FD.io): https://fd.io/
* FD.io Vector Packet Processor (VPP): https://wiki.fd.io/view/VPP
- * ML2 VPP mechanisms driver: https://github.com/naveenjoy/networking-vpp/
- * OPNFV Colorado release - more information: http://www.opnfv.org/colorado
+ * ML2 VPP mechanism driver: https://github.com/openstack/networking-vpp
+ * Networking-vpp launchpad (ticket tracker): https://launchpad.net/networking-vpp
+ * Networking-vpp Wiki: https://wiki.openstack.org/wiki/Networking-vpp/
+ * APEX (TripleO based) installer: https://wiki.opnfv.org/display/apex/Apex