 docs/design/usecases.rst | 169 ++++++++++++++++++++++++++++++++------------
 1 file changed, 117 insertions(+), 52 deletions(-)
diff --git a/docs/design/usecases.rst b/docs/design/usecases.rst
index ef9e82d..ae046f3 100644
--- a/docs/design/usecases.rst
+++ b/docs/design/usecases.rst
@@ -1,21 +1,105 @@
Use Cases
=========
-Resource Requirements
-+++++++++++++++++++++
+Implemented as of this release
+------------------------------
-Workload Placement
-------------------
+DMZ Deployment
+..............
+
+As a service provider, I need to ensure that applications that have not been
+designed for exposure in a DMZ are not attached to DMZ networks.
+
+An example implementation is shown in the Congress use case test "DMZ Placement"
+(dmz.sh) in the Copper repo under the tests folder. This test:
+  * Identifies VMs connected to a DMZ (currently identified through a
+    specifically-named security group)
+  * Identifies VMs connected to a DMZ that are, by policy, not allowed to be
+    (currently implemented through an image tag intended to identify images
+    that are "authorized", i.e. tested and secure, to be DMZ-connected)
+  * Reactively enforces the DMZ placement rule by pausing VMs found to be in
+    violation of the policy.
+
+As implemented through OpenStack Congress:
+
+.. code::
+
+  dmz_server(x) :-
+    nova:servers(id=x, status='ACTIVE'),
+    neutronv2:ports(id, device_id=x, status='ACTIVE'),
+    neutronv2:security_group_port_bindings(id, sg),
+    neutronv2:security_groups(sg, name='dmz')
+
+  dmz_placement_error(id) :-
+    nova:servers(id,name,hostId,status,tenant_id,user_id,image,flavor,az,hh),
+    not glancev2:tags(image, 'dmz'),
+    dmz_server(id)
+
+  execute[nova:servers.pause(id)] :-
+    dmz_placement_error(id),
+    nova:servers(id, status='ACTIVE')
+
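+A rule set like this could be loaded and exercised with the Congress client; a
+sketch, assuming the python-congressclient OpenStack CLI plugin, the default
+"classification" policy, and an illustrative rule file name:
+
+.. code::
+
+  # Load the dmz_server rule (Datalog as above, saved in dmz_server.rule)
+  openstack congress policy rule create classification \
+    "$(cat dmz_server.rule)" --name dmz_server
+
+  # Inspect the rows Congress derived for the dmz_server table
+  openstack congress policy row list classification dmz_server
+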
+Configuration Auditing
+......................
+
+As a service provider or tenant, I need to periodically verify that resource
+configuration requirements have not been violated, as a backup means to proactive
+or reactive policy enforcement.
+
+An example implementation is shown in the Congress use case test "SMTP Ingress"
+(smtp_ingress.sh) in the Copper repo under the tests folder. This test:
+  * Detects that a VM is associated with a security group that allows SMTP
+    ingress (TCP port 25)
+  * Adds a policy table row entry for the VM, which can later be investigated
+    for appropriate use of the security group, etc.
+
+As implemented through OpenStack Congress:
+
+.. code::
+
+  smtp_ingress(x) :-
+    nova:servers(id=x, status='ACTIVE'),
+    neutronv2:ports(port_id, device_id=x, status='ACTIVE'),
+    neutronv2:security_groups(sg, tenant_id, sgn, sgd),
+    neutronv2:security_group_port_bindings(port_id, sg),
+    neutronv2:security_group_rules(sg, rule_id, tenant_id, remote_group_id,
+      'ingress', ethertype, 'tcp', port_range_min, port_range_max, remote_ip),
+    lt(port_range_min, 26),
+    gt(port_range_max, 24)
+
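+Because this rule only populates a table rather than executing an action, the
+audit itself is a query against that table; a sketch, again assuming the
+python-congressclient CLI plugin and the default "classification" policy:
+
+.. code::
+
+  # Each row identifies a VM reachable through a security group that allows
+  # SMTP ingress; review these rows periodically as part of the audit
+  openstack congress policy row list classification smtp_ingress
+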
+Reserved Resources
+..................
+
+As an NFVI provider, I need to ensure that my admins do not inadvertently
+enable VMs to connect to reserved subnets.
+
+An example implementation is shown in the Congress use case test "Reserved
+Subnet" (reserved_subnet.sh) in the Copper repo under the tests folder. This
+test:
+  * Detects that a subnet has been created in a reserved range
+  * Reactively deletes the subnet
+
+As implemented through OpenStack Congress:
+
+.. code::
+
+  reserved_subnet_error(x) :-
+    neutronv2:subnets(id=x, cidr='10.7.1.0/24')
+
+  execute[neutronv2:delete_subnet(x)] :-
+    reserved_subnet_error(x)
+
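+The rule can be exercised end-to-end by creating a subnet in the reserved
+range and watching Congress remove it; a sketch, with illustrative network
+and subnet names:
+
+.. code::
+
+  # Create a subnet in the reserved range; once Congress polls neutron and
+  # evaluates the policy, the subnet should be deleted reactively
+  openstack network create test-net
+  openstack subnet create test-subnet --network test-net \
+    --subnet-range 10.7.1.0/24
+
+  # After the next datasource poll, the subnet should be gone
+  openstack subnet list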
+
+For further analysis and implementation
+---------------------------------------
Affinity
........
Ensures that the VM instance is launched "with affinity to" specific resources,
-e.g. within a compute or storage cluster.
-This is analogous to the affinity rules in
-`VMWare vSphere DRS <https://pubs.vmware.com/vsphere-50/topic/com.vmware.vsphere.resmgmt.doc_50/GUID-FF28F29C-8B67-4EFF-A2EF-63B3537E6934.html>`_.
-Examples include: "Same Host Filter", i.e. place on the same compute node as a given set of instances,
-e.g. as defined in a scheduler hint list.
+e.g. within a compute or storage cluster. Examples include: "Same Host Filter",
+i.e. place on the same compute node as a given set of instances, e.g. as defined
+in a scheduler hint list.
As implemented by OpenStack Heat using server groups:
@@ -48,10 +132,10 @@ Anti-Affinity
.............
Ensures that the VM instance is launched "with anti-affinity to" specific resources,
-e.g. outside a compute or storage cluster, or geographic location.
-This filter is analogous to the anti-affinity rules in vSphere DRS.
-Examples include: "Different Host Filter", i.e. ensures that the VM instance is launched
-on a different compute node from a given set of instances, as defined in a scheduler hint list.
+e.g. outside a compute or storage cluster, or geographic location. Examples
+include: "Different Host Filter", i.e. ensures that the VM instance is launched
+on a different compute node from a given set of instances, as defined in a
+scheduler hint list.
As implemented by OpenStack Heat using scheduler hints:
@@ -88,46 +172,27 @@ As implemented by OpenStack Heat using scheduler hints:
- network: {get_param: network}
scheduler_hints: {different_host: {get_resource: serv1}}
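+Either template can be instantiated with the Heat client; a sketch, with an
+illustrative template file and stack name:
+
+.. code::
+
+  # Launch the stack; Nova's scheduler honors the server-group and
+  # different_host hints when placing the instances
+  openstack stack create -t affinity_stack.yaml affinity-demo
+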
-DMZ Deployment
-..............
-As a service provider,
-I need to ensure that applications which have not been designed for exposure in a DMZ zone,
-are not attached to DMZ networks.
-
-Configuration Auditing
-----------------------
-
-As a service provider or tenant,
-I need to periodically verify that resource configuration requirements have not been violated,
-as a backup means to proactive or reactive policy enforcement.
-
-Generic Policy Requirements
-+++++++++++++++++++++++++++
-
-NFVI Self-Service Constraints
------------------------------
-
-As an NFVI provider,
-I need to ensure that my self-service tenants are not able to configure their VNFs in ways
-that would impact other tenants or the reliability, security, etc of the NFVI.
-
Network Access Control
......................
-Networks connected to VMs must be public, or owned by someone in the VM owner's group.
+Networks connected to VMs must be public, or owned by someone in the VM owner's
+group.
This use case captures the intent of the following sub-use-cases:
* Link Mirroring: As a troubleshooter,
- I need to mirror traffic from physical or virtual network ports so that I can investigate trouble reports.
+ I need to mirror traffic from physical or virtual network ports so that I
+ can investigate trouble reports.
* Link Mirroring: As an NFVaaS tenant,
- I need to be able to mirror traffic on my virtual network ports so that I can investigate trouble reports.
+ I need to be able to mirror traffic on my virtual network ports so that I
+ can investigate trouble reports.
* Unauthorized Link Mirroring Prevention: As an NFVaaS tenant,
- I need to be able to prevent other tenants from mirroring traffic on my virtual network ports
- so that I can protect the privacy of my service users.
+ I need to be able to prevent other tenants from mirroring traffic on my
+ virtual network ports so that I can protect the privacy of my service users.
* Link Mirroring Delegation: As an NFVaaS tenant,
- I need to be able to allow my NFVaaS SP customer support to mirror traffic on my virtual network ports
- so that they can assist in investigating trouble reports.
+ I need to be able to allow my NFVaaS SP customer support to mirror traffic
+ on my virtual network ports so that they can assist in investigating trouble
+ reports.
As implemented through OpenStack Congress:
@@ -172,18 +237,17 @@ As implemented through OpenStack Congress:
ldap:group(user1, g),
ldap:group(user2, g)
-Resource Management
--------------------
-
Resource Reclamation
....................
-As a service provider or tenant,
-I need to be informed of VMs that are under-utilized so that I can reclaim the VI resources.
-(example from `RuleYourCloud blog <http://ruleyourcloud.com/2015/03/12/scaling-up-congress.html>`_)
+As a service provider or tenant, I need to be informed of VMs that are
+under-utilized so that I can reclaim the VI resources (example from the
+`RuleYourCloud blog <http://ruleyourcloud.com/2015/03/12/scaling-up-congress.html>`_).
As implemented through OpenStack Congress:
+*Note: untested example...*
+
.. code::
reclaim_server(vm) :-
@@ -198,11 +262,13 @@ As implemented through OpenStack Congress:
Resource Use Limits
...................
-As a tenant or service provider,
-I need to be automatically terminate an instance that has run for a pre-agreed maximum duration.
+As a tenant or service provider, I need to automatically terminate an
+instance that has run for a pre-agreed maximum duration.
As implemented through OpenStack Congress:
+*Note: untested example...*
+
.. code::
terminate_server(vm) :-
@@ -214,4 +280,3 @@ As implemented through OpenStack Congress:
nova:servers(vm, vm_name, user_id),
keystone:users(user_id, email)
-