Diffstat (limited to 'docs')
-rw-r--r--  docs/requirements/01-intro.rst      4
-rw-r--r--  docs/requirements/02-use_cases.rst  54
2 files changed, 44 insertions, 14 deletions
diff --git a/docs/requirements/01-intro.rst b/docs/requirements/01-intro.rst
index 3da6f86..4d15cc7 100644
--- a/docs/requirements/01-intro.rst
+++ b/docs/requirements/01-intro.rst
@@ -13,9 +13,9 @@ Edge NFVI location has certain specific requirements related to:
1. Appropriate Tunneling for User Traffic across WAN (Ethernet, IP/MPLS) links
#. Appropriate Tunneling for Management Traffic across WAN links
-#. Including reachability requirements to the compute platform (‘eth0’ resilience,
+#. Including reachability requirements to the compute platform ('eth0' resilience,
this also includes a backup path through other media, e.g. 4G/5G)
-#. Extending Multi-DC management to address many small "DC" locations
+#. Extending Multi-data center management to address many small or micro data center locations
#. Monitoring Capabilities required for a remote Compute Node
#. Squaring Bare Metal with remote survivability and whether IaaS is more appropriate for remote locations
#. Security, as demarcation technology is operated in an untrusted environment (CSP perspective)
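+
+As an illustration of the management-traffic tunneling requirement above, the sketch
+below brings up a GRE tunnel on the edge node. It is illustrative only: the addresses
+are RFC 5737 examples, and a real deployment would take them from site configuration.
+
+.. code-block:: python
+
+   import subprocess
+
+   # Sketch: carry management traffic in a GRE tunnel across the WAN.
+   # The local/remote endpoint addresses are placeholders.
+   commands = [
+       "ip tunnel add mgmt0 mode gre local 192.0.2.10 remote 203.0.113.1 ttl 64",
+       "ip addr add 10.255.0.2/30 dev mgmt0",
+       "ip link set mgmt0 up",
+   ]
+   for cmd in commands:
+       subprocess.run(cmd.split(), check=True)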
diff --git a/docs/requirements/02-use_cases.rst b/docs/requirements/02-use_cases.rst
index 6777a02..828cdab 100644
--- a/docs/requirements/02-use_cases.rst
+++ b/docs/requirements/02-use_cases.rst
@@ -1,29 +1,59 @@
Use cases and scenarios
=======================
-There are several use cases related to Edge NFV:
+There are several use cases related to Edge NFV.
+This section briefly describes them, along with the issues or complexities that they
+introduce versus a typical data center (DC) deployment.
+
1. vE-CPE.
- [vE-CPE]_ is related to most popular NFV use case where NFVI compute node is
- located at customer premises. Typical applications are virtual Firewall and Virtual BGP router;
- VNF chain can be hosted in vE-CPU host and/or DC
+ [vE-CPE]_ is related to the most popular NFV use case, in which an NFVI compute node is
+ located at the customer premises.
+ Typical applications are a virtual firewall and a virtual router that replace their physical equivalents.
+ The service chain can include VNFs hosted in the vE-CPE host and/or in a centralized data center.
+ Complexities include:
+
+ * This application is very cost-sensitive, so the server will typically have lower performance
+   than a server in the DC.
+ * There may not be layer 2/Ethernet connectivity at the deployment site, so tunneling may be required.
+ * There may not be initial connectivity to the node, so some sort of zero-touch protocol may be required.
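+
+ A zero-touch bootstrap could, for example, follow a simple phone-home pattern.
+ The sketch below is illustrative only; the provisioning URL and the device-identity
+ scheme are assumptions, not part of any existing specification.
+
+ .. code-block:: python
+
+    import json
+    import time
+    import urllib.request
+
+    # Hypothetical, operator-specific provisioning endpoint.
+    PROVISIONING_URL = "https://provisioning.example.net/bootstrap"
+
+    def phone_home(serial: str) -> dict:
+        """Announce this CPE node and fetch its initial configuration."""
+        req = urllib.request.Request(
+            PROVISIONING_URL,
+            data=json.dumps({"serial": serial}).encode(),
+            headers={"Content-Type": "application/json"},
+        )
+        while True:
+            try:
+                with urllib.request.urlopen(req, timeout=10) as resp:
+                    return json.load(resp)
+            except OSError:
+                time.sleep(30)  # WAN may not be up yet; keep retrying
+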
-2. Stand-alone vE-CPE.
- It is the same as above but all virtual appliances are hosted at the same CPE compute node.
+#. Stand-alone vE-CPE.
+ It is the same as above, but all virtual network functions are hosted on the same CPE compute node.
-3. Residential GW.
+#. Residential GW.
Similar to vE-CPE, the major difference is scale. Typical VNFs are "WAN fault monitoring",
- "Performance monitoring". Ratio between deployed vE-CPE
- and Residential GW might reach 1:100 or even 1:1000, thus VNF management overhead must be minimized.
+ "Performance monitoring".
+ Ratio between deployed vE-CPE and Residential GW might reach 1:100 or even 1:1000,
+ so VNF management overhead must be minimized.
For instance, self-termination after a predefined activity period seems preferable to
explicit VNF removal via the management system.
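+
+ A self-terminating VNF might implement this with a simple inactivity watchdog; the
+ sketch below is a minimal illustration, assuming the VNF can observe its own activity
+ and is permitted to power itself off.
+
+ .. code-block:: python
+
+    import subprocess
+    import time
+
+    INACTIVITY_LIMIT = 7 * 24 * 3600  # assumed activity period: one week
+
+    def watchdog(last_activity: float) -> None:
+        """Self-terminate once the predefined activity period has expired,
+        instead of waiting for explicit removal by the management system."""
+        if time.time() - last_activity > INACTIVITY_LIMIT:
+            subprocess.run(["poweroff"], check=True)
+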
-4. Distributed Base station.
+#. Distributed Base station.
TBD. What is the difference for it?
-5. Network connectivity.
+#. Network connectivity.
In most cases the CPE is connected to Metro Ethernet [#f1]_.
-
+#. Micro Data Center.
+ NFVI resources may be located at the edge of the network for the use cases listed above.
+ Doing so increases the scale of the clouds or locations that must be orchestrated and controlled.
+ If OpenStack is run in a distributed fashion, with a central node controlling distributed
+ NFVI servers, the following issues may be seen:
+
+ * Lack of compatibility between different versions of OpenStack.
+ * Scalability of OpenStack.
+ * Operation in low-speed or lossy networks is complicated by the amount of messaging required.
+ * OpenStack communications between client and server are not secured, which creates a
+   vulnerability in a distributed deployment.
+ * OpenStack numbers VNF ports sequentially, and the VM/VNF sees them in that serial order.
+   The difficulty comes when trying to verify that the LAN has been connected to the correct
+   LAN port, the WAN to the correct WAN port, and so on (see the sketch after this list).
+ * While OpenStack provides a rich set of APIs, critical support is lacking:
+
+ * No APIs for ssh access to VM/VNFs.
+ * No APIs for port mirroring in Neutron.
+ * No APIs for setting OpenStack oversubscription parameters.
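+
+ As an illustration of the port-ordering issue above, a guest-side check can match each
+ interface's MAC address against the port metadata OpenStack exposes through the config
+ drive. This is a minimal sketch: it assumes the config drive is mounted at /mnt/config
+ with the standard network_data.json layout, and the link ids and interface names in the
+ example are hypothetical.
+
+ .. code-block:: python
+
+    import json
+    import pathlib
+
+    NETWORK_DATA = pathlib.Path("/mnt/config/openstack/latest/network_data.json")
+
+    def mac_of(ifname: str) -> str:
+        """Read a guest interface's MAC address from sysfs."""
+        return pathlib.Path(f"/sys/class/net/{ifname}/address").read_text().strip()
+
+    def verify(expected: dict) -> bool:
+        """Check that each Neutron port (identified by its metadata link id)
+        landed on the intended guest interface, e.g. LAN vs. WAN."""
+        links = json.loads(NETWORK_DATA.read_text())["links"]
+        port_macs = {link["id"]: link["ethernet_mac_address"] for link in links}
+        return all(port_macs[link_id] == mac_of(ifname)
+                   for link_id, ifname in expected.items())
+
+    # Hypothetical mapping of metadata link ids to guest NIC names.
+    print(verify({"tap-lan": "eth1", "tap-wan": "eth0"}))
+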
.. [#f1] In all of the above use cases, management traffic comes in-band with the tenant traffic.