Diffstat (limited to 'docs')
-rw-r--r--   docs/requirements/01-intro.rst        | 14
-rw-r--r--   docs/requirements/02-use_cases.rst    | 28
-rw-r--r--   docs/requirements/03-architecture.rst |  7
-rw-r--r--   docs/requirements/04-gaps.rst         | 39
-rw-r--r--   docs/requirements/glossary.rst        |  3
-rw-r--r--   docs/requirements/index.rst           |  5
6 files changed, 38 insertions, 58 deletions
diff --git a/docs/requirements/01-intro.rst b/docs/requirements/01-intro.rst
index 86fcd7f..5b2d12b 100644
--- a/docs/requirements/01-intro.rst
+++ b/docs/requirements/01-intro.rst
@@ -1,8 +1,8 @@
 Introduction
 ============
-The purpose of this Requirements Project is to articulate the capabilities
-and behaviours needed in Edge NFV platforms, and how they interact with
+The purpose of this Requirements Project is to articulate the capabilities
+and behaviours needed in Edge NFV platforms, and how they interact with
 centralized NFVI and MANO components of NFV solutions.
@@ -13,14 +13,12 @@ Edge NFVI location has certain specific requirements related to:
 1. Appropriate Tunneling for User Traffic across WAN (Ethernet, IP/MPLS) links
 #. Appropriate Tunneling for Management Traffic across WAN links
-#. Including reachability requirements to the compute platform (‘eth0’ resilience,
+#. Reachability requirements for the compute platform (‘eth0’ resilience;
    this also includes a backup path through other media, e.g. 4G/5G)
 #. Extending Multi-DC management to address many small "DC" locations
 #. Monitoring Capabilities required for a remote Compute Node
 #. Squaring Bare Metal with remote survivability and whether IaaS is more
    appropriate for remote locations
-#. Security.As demarcation technology is operated in an un-trusted environment (CSP perspective)
-   additional means need to be implemented. Similarly, the enterprise might have concerns if
-   the security architecture is impacted as VNFs provide functions at different locations than
+#. Security. As demarcation technology is operated in an untrusted environment (CSP perspective),
+   additional means need to be implemented. Similarly, the enterprise might have concerns if
+   the security architecture is impacted as VNFs provide functions at different locations than
    the previous hardware; topics include authentication, authorization, and securing the traffic.
-
-
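The ‘eth0’ resilience item above implies some liveness check of the management path with
fallback to an alternate medium. A minimal sketch in Python, assuming hypothetical
management endpoints; the real probe and any 4G/5G failover mechanism would be platform
specific::

    import socket

    # Hypothetical management endpoints: primary over the WAN link,
    # backup over a cellular (4G/5G) path.
    PATHS = [("mgmt.primary.example.net", 443), ("mgmt.backup.example.net", 443)]

    def first_reachable(paths, timeout=3.0):
        """Return the first endpoint that accepts a TCP connection, else None."""
        for host, port in paths:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return host, port
            except OSError:
                continue  # this path is down, try the next medium
        return None

    print(first_reachable(PATHS))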
diff --git a/docs/requirements/02-use_cases.rst b/docs/requirements/02-use_cases.rst
index bcadeb8..b70cc0f 100644
--- a/docs/requirements/02-use_cases.rst
+++ b/docs/requirements/02-use_cases.rst
@@ -1,31 +1,29 @@
 Use cases and scenarios
 =======================
-There are several use cases related to Edge NFV:
+There are several use cases related to Edge NFV:
-1. vE-CPE.
-   [vE-CPE]_ is related to most popupar NFV use case where NFVI compute node is
+1. vE-CPE.
+   [vE-CPE]_ relates to the most popular NFV use case, where the NFVI compute node is
    located at customer premises. Typical applications are a virtual Firewall and
    a virtual BGP router;
-   VNF chain can be hosted in vE-CPU host and/or DC
+   the VNF chain can be hosted in the vE-CPE host and/or the DC.
-2. Stand-alone vE-CPE.
+2. Stand-alone vE-CPE.
    It is the same as above, but all virtual appliances are hosted at the same CPE
    compute node.
-3. Residential GW.
-   Similar to vE-CPE, the major difference is scale. Typical VNFs are "WAN fault monitoring",
-   "Performance monitoring". Ratio between deplyed vE-CPE
-   and Residential GW might reach 1:100 or even 1:1000, thus VNF management overhead must be minimized.
-   For instance, self-termination after predefined activity period seems preferable over
+3. Residential GW.
+   Similar to vE-CPE; the major difference is scale. Typical VNFs are "WAN fault monitoring" and
+   "Performance monitoring". The ratio between deployed vE-CPE
+   and Residential GW might reach 1:100 or even 1:1000, thus VNF management overhead must be minimized.
+   For instance, self-termination after a predefined activity period seems preferable over
    explicit VNF removal via the management system.
-4. Distributed Base station.
+4. Distributed Base station.
    TBD. What is the difference for it?
-5. Network connectivity.
+5. Network connectivity.
    In most cases the CPE is connected to Metro Ethernet [#f1]_ .
-.. [#f1] In all above use cases management traffic is coming inband with tenant traffic.
-
-
+.. [#f1] In all the above use cases, management traffic comes inband with tenant traffic.
diff --git a/docs/requirements/03-architecture.rst b/docs/requirements/03-architecture.rst
index 80750b1..e82dfdc 100644
--- a/docs/requirements/03-architecture.rst
+++ b/docs/requirements/03-architecture.rst
@@ -6,8 +6,8 @@ Functional overview
 We foresee two OpenStack deployment models:
 
 1. Single-cloud. Centralized OpenStack controller and ENFVI nodes are Compute nodes
-2. Multi-cloud. Each NFVI node contains OpenStack controller, thus it becomes "embedded cloud"
-   with single compute node
+2. Multi-cloud. Each NFVI node contains an OpenStack controller, thus it becomes an "embedded cloud"
+   with a single internal compute node
 
 Architecture Overview
 ---------------------
@@ -22,5 +22,4 @@ This is main part.
 High level northbound interface specification
 ---------------------------------------------
-What is northbound here? VIM controller?
-
+What is northbound here? VIM controller?
\ No newline at end of file
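In the multi-cloud model above, a central management system ends up talking to one small
OpenStack endpoint per edge site. A minimal sketch using the openstacksdk library,
assuming hypothetical site names and placeholder credentials::

    import openstack

    # Hypothetical edge sites, each running its own embedded OpenStack controller.
    EDGE_SITES = ["edge-site-001", "edge-site-002"]

    def connect_to_site(site):
        # One authentication endpoint per embedded cloud.
        return openstack.connect(
            auth_url=f"https://{site}.example.net:5000/v3",
            project_name="admin",
            username="admin",
            password="secret",          # placeholder credentials
            user_domain_name="Default",
            project_domain_name="Default",
        )

    # Inventory all VNF instances across the edge estate, site by site.
    for site in EDGE_SITES:
        conn = connect_to_site(site)
        for server in conn.compute.servers():
            print(site, server.name, server.status)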
\ No newline at end of file
diff --git a/docs/requirements/04-gaps.rst b/docs/requirements/04-gaps.rst
index c0fbc39..0ea65c5 100644
--- a/docs/requirements/04-gaps.rst
+++ b/docs/requirements/04-gaps.rst
@@ -6,47 +6,37 @@ Network related gaps
 1. Terminology.
    Consider keeping upstream/downstream terminology for the traffic leaving/coming to Edge NFV. This gives
-   unambiquies names 'uplink/downlink' or 'access/network' for CPE interfaces. Inside DC this traffic is
+   unambiguous names 'uplink/downlink' or 'access/network' for CPE interfaces. Inside the DC this traffic is
    called east-west, with no special meaning for interfaces on the compute/network node.
-
-2. Uplink interface capacity.
-   In most cases those are 1GE as opposite to DC where 10/40G interfaces are prevaling. As result
+2. Uplink interface capacity.
+   In most cases these are 1GE, as opposed to the DC where 10/40G interfaces prevail. As a result,
    1GE interfaces are not part of CI.
-
-3. Tunneling technology:
+3. Tunneling technology:
    a. Case stand-alone NFVI - 802.1ad S-VLAN or MPLS.
    #. Case distributed NFVI - VXLAN or NVGRE over 802.1ad.
       * VXLAN and NVGRE tunnels don't support OAM checks.
    #. None of the above tunneling technologies supports integrity checks.
    #. None of the above tunneling technologies supports payload encryption (optional).
-
-4. Management traffic:
-   a. Management traffic should come inband with tenant traffic.
-   b. Management traffic shoud be easiliy come trough firewalls, i.e. single IP/port woudl be ideal
+4. Management traffic:
+   a. Management traffic should come inband with tenant traffic.
+   b. Management traffic should pass easily through firewalls, i.e. a single IP/port would be ideal
       (compare with the bunch of protocols OpenStack uses [firewall]_).
-   c. Management connection might be disrupted for a long period of time; once provisioned Edge NFV device
-      must keep its functionaly with no respect of management connection state.
-
-5. Resiliency:
+   c. The management connection might be disrupted for long periods of time; once provisioned, an Edge NFV device
+      must keep its functionality regardless of the management connection state.
+5. Resiliency:
    a. Network resiliency is based on dual-homing; the service path shall be forked in that case.
       A VM presumably shall be able to select the active virtual link for data forwarding.
-   #. SLA assurance for tenant virtual link - mandatory
-   #. Fault propagation towards VM is mandatory.
-
-
+   #. SLA assurance for the tenant virtual link - mandatory
+   #. Fault propagation towards the VM is mandatory.
 Hypervisor gaps
 ---------------
-
-#. Monitoring Capabilities required for a remote Compute Node; Hypervisor shall provide extended monitoring of
+#. Monitoring Capabilities required for a remote Compute Node; the Hypervisor shall provide extended monitoring of
    the VM and its resource usage.
-
 OpenStack gaps
 --------------
-
 Should this later be split per specific component? (nova, neutron...)
 
 OpenStack Nova
 
 1. The management system should support tens of thousands of individual hosts. Currently each Edge Host is allocated
    in an individual zone; is this approach scalable?
 2. A host is explicitly selected, effectively bypassing the Nova scheduler
@@ -56,5 +46,4 @@ Deployment gaps
 1. Only traffic interfaces are exposed (e.g. no eth0, no USB); SW deployment is different from the DC.
 #. The Linux shell shall not be exposed; the Linux CLI shall presumably be replaced by REST.
 #. Kernel and Hypervisor are hardened. Only OpenStack agents might be added during deployment.
-#. AMT or IPMI shall not be used for SW deployment.
-
+#. AMT or IPMI shall not be used for SW deployment.
\ No newline at end of file
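Regarding the Nova gaps on per-host zones and scheduler bypass: pinning an instance to a
specific edge host is commonly done through the availability-zone field, whose admin-only
"zone:host" form sidesteps the scheduler. A minimal sketch with the openstacksdk library,
where the cloud name, image/flavor/network IDs, and host names are all placeholders::

    import openstack

    # Named cloud from clouds.yaml (hypothetical).
    conn = openstack.connect(cloud="edge")

    server = conn.compute.create_server(
        name="vfw-site-001",
        image_id="IMAGE_UUID",              # placeholder
        flavor_id="FLAVOR_UUID",            # placeholder
        networks=[{"uuid": "NET_UUID"}],    # placeholder
        # "zone:host" pins the VM to one edge compute node,
        # effectively bypassing the Nova scheduler.
        availability_zone="edge-zone-001:edge-host-001",
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)

One availability zone per edge host is what makes this pinning possible today, which is
exactly why the scalability of that approach is questioned above.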
\ No newline at end of file
diff --git a/docs/requirements/glossary.rst b/docs/requirements/glossary.rst
index abe0bf6..90e0038 100644
--- a/docs/requirements/glossary.rst
+++ b/docs/requirements/glossary.rst
@@ -23,5 +23,4 @@ mapping/translating the OPNFV terms to terminology used in other contexts.
    Network Function Virtualization Infrastructure
 
 vE-CPE
-   Virtual Enterprise-Customer Premises Equipment
-
+   Virtual Enterprise-Customer Premises Equipment
\ No newline at end of file
diff --git a/docs/requirements/index.rst b/docs/requirements/index.rst
index c0efd81..271e9e2 100644
--- a/docs/requirements/index.rst
+++ b/docs/requirements/index.rst
@@ -1,8 +1,5 @@
 ENFV: Edge NFV requirements project
-====================================
-
-Contents:
-
+***********************************
 .. toctree::
    :maxdepth: 4