Diffstat (limited to 'docs/requirements/use_cases')
-rw-r--r--  docs/requirements/use_cases/l3vpn_ecmp.rst                 |  7
-rw-r--r--  docs/requirements/use_cases/l3vpn_hub_and_spoke.rst        | 12
-rw-r--r--  docs/requirements/use_cases/programmable_provisioning.rst  | 28
-rw-r--r--  docs/requirements/use_cases/service_binding_pattern.rst (renamed from docs/requirements/use_cases/service-binding-pattern.rst) | 13
4 files changed, 33 insertions(+), 27 deletions(-)
diff --git a/docs/requirements/use_cases/l3vpn_ecmp.rst b/docs/requirements/use_cases/l3vpn_ecmp.rst
index b3d5b63..7bcb64f 100644
--- a/docs/requirements/use_cases/l3vpn_ecmp.rst
+++ b/docs/requirements/use_cases/l3vpn_ecmp.rst
@@ -31,15 +31,16 @@
 subnet 10.1.1.0/24 and assigned the same IP addresses 10.1.1.5. VNF 2 and VNF 3
 on host A and B respectively, attached to subnet 10.1.1.0/24, and assigned
 different IP addresses 10.1.1.6 and 10.1.1.3 respectively.
 
-Here, the Network VRF Policy Resource is ``ECMP/AnyCast``. Traffic to **Anycast 10.1.1.5**
-can be load split from either WAN GW or another VM like G5.
+Here, the Network VRF Policy Resource is ``ECMP/AnyCast``. Traffic to the
+anycast IP **10.1.1.5** can be load split from either WAN GW or another VM like
+G5.
 
 Current implementation
 ~~~~~~~~~~~~~~~~~~~~~~
 
-Support for creating and managing L3VPNs is in general available in OpenStack
+Support for creating and managing L3VPNs is, in general, available in OpenStack
 Neutron by means of the BGPVPN project [BGPVPN]_. However, the BGPVPN project
 does not yet fully support ECMP as described in the following.
diff --git a/docs/requirements/use_cases/l3vpn_hub_and_spoke.rst b/docs/requirements/use_cases/l3vpn_hub_and_spoke.rst
index 07004ef..17459b6 100644
--- a/docs/requirements/use_cases/l3vpn_hub_and_spoke.rst
+++ b/docs/requirements/use_cases/l3vpn_hub_and_spoke.rst
@@ -8,12 +8,12 @@ Hub and Spoke Case
 Description
 ~~~~~~~~~~~
 
-A Hub-and-spoke topology comprises two types of network entities: a central hub
-and multiple spokes. The corresponding VRFs of the hub and the spokes are
-configured to import and export routes such that all traffic is routed through
-the hub. As a result, spokes cannot communicate with each other directly, but
-only indirectly via the central hub. Hence, the hub typically hosts central network
-functions such firewalls.
+In a traditional Hub-and-spoke topology there are two types of network entities:
+a central hub and multiple spokes. The corresponding VRFs of the hub and the
+spokes are configured to import and export routes such that all traffic is
+directed through the hub. As a result, spokes cannot communicate with each other
+directly, but only indirectly via the central hub. Hence, the hub typically
+hosts central network functions such firewalls.
 
 Furthermore, there is no layer 2 connectivity between the VNFs.
diff --git a/docs/requirements/use_cases/programmable_provisioning.rst b/docs/requirements/use_cases/programmable_provisioning.rst
index 8d143f3..d66a54c 100644
--- a/docs/requirements/use_cases/programmable_provisioning.rst
+++ b/docs/requirements/use_cases/programmable_provisioning.rst
@@ -1,24 +1,27 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. http://creativecommons.org/licenses/by/4.0
 
-Programmable Provisioning of Provider networks
+Programmable Provisioning of Provider Networks
 ----------------------------------------------
 
 Description
 ~~~~~~~~~~~
-In NFV environment the VNFM (consumer of OpenStack IaaS API) have no
-administrative rights, however in the telco domain provider networks are used in
-some cases. When a provider network is created administrative rights are needed
-what in the case of a VNFM without administrative rights needs manual work.
-It shall be possible to configure provider networks without administrative rights.
-It should be possible to assign the capability to create provider networks to
-any roles.
+
+In a NFV environment the VNFMs (Virtual Network Function Manager) are consumers
+of the OpenStack IaaS API. They are often deployed without administrative rights
+on top of the NFVI platform. Furthermore, in the telco domain provider networks
+are often used. However, when a provider network is created administrative
+rights are needed what in the case of a VNFM without administrative rights
+requires additional manual configuration work. It shall be possible to
+configure provider networks without administrative rights. It should be
+possible to assign the capability to create provider networks to any roles.
+
 Derived Requirements
 ~~~~~~~~~~~~~~~~~~~~~
  - Authorize the possibility of provider network creation based on policy
    - There should be a new entry in :code:`policy.json` which controls the provider network creation
    - Default policy of this new entry should be :code:`rule:admin_or_owner`.
-   - This policy should be respected by neutron API
+   - This policy should be respected by the Neutron API
 
 Northbound API / Workflow
 +++++++++++++++++++++++++
@@ -34,5 +37,8 @@ Only admin users can manage provider networks [OS-NETWORKING-GUIDE-ML2]_.
 
 Potential implementation
 ~~~~~~~~~~~~~~~~~~~~~~~~
- - Policy engine shall be able to handle a new provider network creation and modification related policy
- - When a provider network is created or modified neutron should check the authority with the policy engine instead of requesting administrative rights
+ - Policy engine shall be able to handle a new provider network creation and
+   modification related policy.
+ - When a provider network is created or modified neutron should check the
+   authority with the policy engine instead of requesting administrative
+   rights.
diff --git a/docs/requirements/use_cases/service-binding-pattern.rst b/docs/requirements/use_cases/service_binding_pattern.rst
index a5088a3..8abcf7a 100644
--- a/docs/requirements/use_cases/service-binding-pattern.rst
+++ b/docs/requirements/use_cases/service_binding_pattern.rst
@@ -18,7 +18,7 @@ this use case:
 
   Typically, a vNIC is bound to a single network. Hence, in order to directly
   connect a service function to multiple networks at the same time, multiple vNICs
-  are needed - each vNIC binding the service function to a separate network. For
+  are needed - each vNIC binds the service function to a separate network. For
   service functions requiring connectivity to a large number of networks, this
  approach does not scale as the number of vNICs per VM is limited and additional
   vNICs occupy additional resources on the hypervisor.
@@ -146,12 +146,11 @@ classic Neutron ports.
 
 Current Implementation
 ^^^^^^^^^^^^^^^^^^^^^^
 
-The core Neutron API [**describe what is meant by that**] does not follow the
-service binding design pattern. For example, a port has to exist in a Neutron
-network - specifically it has to be created for a particular Neutron network. It
-is not possible to create just a port and assign it to a network later on as
-needed. As a result, a port cannot be moved from one network to another, for
-instance.
+The core Neutron API does not follow the service binding design pattern. For
+example, a port has to exist in a Neutron network - specifically it has to be
+created for a particular Neutron network. It is not possible to create just a
+port and assign it to a network later on as needed. As a result, a port cannot
+be moved from one network to another, for instance.
 
 Regarding the shared service function use case outlined above, there is an
 ongoing activity in Neutron [VLAN-AWARE-VMs]_. The solution proposed by this
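One possible shape of the :code:`policy.json` change requested in the third diff. The ``create_network:provider:*`` keys shown are the provider attributes guarded in Neutron's default policy (normally ``rule:admin_only``); relaxing them to ``rule:admin_or_owner`` matches the derived requirement, although a deployment could instead point them at a dedicated role. A sketch of a fragment, not a tested configuration:

```json
{
    "create_network:provider:network_type": "rule:admin_or_owner",
    "create_network:provider:physical_network": "rule:admin_or_owner",
    "create_network:provider:segmentation_id": "rule:admin_or_owner"
}
```

With such entries in place, the Neutron policy engine, rather than a hard-coded admin check, would decide whether a VNFM's token may set provider attributes.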
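The ``ECMP/AnyCast`` behavior in the first diff splits traffic destined to the anycast IP 10.1.1.5 across equal-cost instances. A minimal per-flow sketch of that idea; the hash scheme and next-hop names are illustrative assumptions, not the actual BGPVPN or dataplane implementation:

```python
# Illustrative per-flow ECMP split toward an anycast destination.
# NEXT_HOPS names are hypothetical labels for the two VNF instances
# that both answer to 10.1.1.5 in the use case.
import hashlib

NEXT_HOPS = ["host-A/VNF1", "host-B/VNF1'"]

def pick_next_hop(src_ip, src_port, dst_ip="10.1.1.5", dst_port=80, proto="tcp"):
    # Hash the flow 5-tuple so every packet of a flow takes the same path
    # (per-flow load splitting, avoiding packet reordering).
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return NEXT_HOPS[digest % len(NEXT_HOPS)]

# The same flow always maps to the same instance:
assert pick_next_hop("10.1.1.6", 12345) == pick_next_hop("10.1.1.6", 12345)
```

Real routers typically seed such hashes per device so different WAN GWs can still spread the same flows differently.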
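The route-target logic that the second diff describes for hub-and-spoke VRFs can be modeled in a few lines. The ``Vrf`` class and the route-target names are hypothetical; only the import/export rule itself comes from the text:

```python
# Illustrative model of hub-and-spoke VRF route-target import/export.
# Class and route-target names are made up for this sketch.

class Vrf:
    def __init__(self, name, import_rts, export_rts):
        self.name = name
        self.import_rts = set(import_rts)
        self.export_rts = set(export_rts)

    def learns_routes_from(self, other):
        # A VRF imports another VRF's routes iff an exported route
        # target matches one of its import targets.
        return bool(self.import_rts & other.export_rts)

# Hub imports what spokes export and vice versa; spokes share no targets.
hub = Vrf("hub", import_rts={"rt:spoke"}, export_rts={"rt:hub"})
spoke1 = Vrf("spoke1", import_rts={"rt:hub"}, export_rts={"rt:spoke"})
spoke2 = Vrf("spoke2", import_rts={"rt:hub"}, export_rts={"rt:spoke"})

assert hub.learns_routes_from(spoke1)
assert spoke1.learns_routes_from(hub)
assert not spoke1.learns_routes_from(spoke2)  # only reachable via the hub
```

This is exactly why the hub is the natural place for central functions such as firewalls: all spoke-to-spoke traffic must transit it.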
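The service binding design pattern discussed in the fourth diff (create a port first, bind it to a network as a separate step) contrasts with the current core Neutron API, where a port is created for one fixed network. A hypothetical sketch of the pattern; the class and method names are illustrative, not a Neutron API:

```python
# Hypothetical sketch of the "service binding" pattern: the port exists
# independently and binding to a network is a separate, reversible step.

class Network:
    def __init__(self, name):
        self.name = name

class Port:
    def __init__(self, name):
        self.name = name
        self.network = None  # created unbound, unlike a core Neutron port

    def bind(self, network):
        self.network = network

    def unbind(self):
        self.network = None

p = Port("sf-port")                        # step 1: create, no network yet
net_a, net_b = Network("net-a"), Network("net-b")
p.bind(net_a)                              # step 2: bind when needed
p.unbind()
p.bind(net_b)                              # the port moves between networks
assert p.network.name == "net-b"
```

Under this pattern, moving a service function between networks is a rebind rather than a delete-and-recreate of the port, which is the limitation the section calls out.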