-rw-r--r--  docs/release/userguide/UC01-feature.userguide.rst    |   6
-rw-r--r--  docs/release/userguide/UC02-feature.userguide.rst    |  49
-rw-r--r--  docs/release/userguide/UC03-feature.userguide.rst    |  53
-rw-r--r--  docs/release/userguide/auto-UC02-TC-mapping.png      | bin 0 -> 48301 bytes
-rw-r--r--  docs/release/userguide/auto-UC02-cardinalities.png   | bin 0 -> 36684 bytes
-rw-r--r--  docs/release/userguide/auto-UC02-data1.jpg           | bin 122920 -> 51570 bytes
-rw-r--r--  docs/release/userguide/auto-UC02-data2.jpg           | bin 378585 -> 217832 bytes
-rw-r--r--  docs/release/userguide/auto-UC02-data3.jpg           | bin 462367 -> 274235 bytes
-rw-r--r--  docs/release/userguide/auto-UC02-logic.png           | bin 0 -> 39141 bytes
-rw-r--r--  docs/release/userguide/auto-UC03-TC-archit.png       | bin 0 -> 47579 bytes
-rw-r--r--  docs/release/userguide/auto-UC03-TestCases.png       | bin 0 -> 20920 bytes
-rw-r--r--  docs/release/userguide/index.rst                     |   2
-rw-r--r--  lib/auto/testcase/resiliency/AutoResilItfCloud.py    | 150
-rw-r--r--  lib/auto/testcase/resiliency/AutoResilMain.py        |   1
-rw-r--r--  lib/auto/testcase/resiliency/AutoResilMgTestDef.py   | 120
-rw-r--r--  lib/auto/testcase/resiliency/clouds.yaml             |  42
16 files changed, 313 insertions, 110 deletions
diff --git a/docs/release/userguide/UC01-feature.userguide.rst b/docs/release/userguide/UC01-feature.userguide.rst
index 5cf38e1..ea02bad 100644
--- a/docs/release/userguide/UC01-feature.userguide.rst
+++ b/docs/release/userguide/UC01-feature.userguide.rst
@@ -34,7 +34,7 @@ Preconditions:
Main Success Scenarios:
-* lifecycle management - stop, stop, scale (dependent upon telemetry)
+* lifecycle management - start, stop, scale (dependent upon telemetry)
* recovering from faults (detect, determine appropriate response, act); i.e. exercise closed-loop policy engine in ONAP
@@ -47,7 +47,7 @@ Details on the test cases corresponding to this use case:
* Environment check
- * Basic environment check: Create test script to check basic VIM (OpenStack), ONAP, and VNF are up and running
+ * Basic environment check: Create test script to check basic VIM (OpenStack), ONAP, and VNF(s) are up and running
* VNF lifecycle management
@@ -55,7 +55,7 @@ Details on the test cases corresponding to this use case:
* Tacker Monitoring Driver (VNFMonitorPing):
- * Write Tacker Monitor driver to handle monitor_call and based on return state value create custom events
+ * Write Tacker Monitor driver to handle monitor_call and, based on return state value, create custom events
* If Ping to VNF fails, trigger below events
* Event 1 : Collect failure logs from VNF
diff --git a/docs/release/userguide/UC02-feature.userguide.rst b/docs/release/userguide/UC02-feature.userguide.rst
index 0ecb7de..3ed5781 100644
--- a/docs/release/userguide/UC02-feature.userguide.rst
+++ b/docs/release/userguide/UC02-feature.userguide.rst
@@ -8,7 +8,8 @@
Auto User Guide: Use Case 2 Resiliency Improvements Through ONAP
================================================================
-This document provides the user guide for Fraser release of Auto, specifically for Use Case 2: Resiliency Improvements Through ONAP.
+This document provides the user guide for Fraser release of Auto,
+specifically for Use Case 2: Resiliency Improvements Through ONAP.
Description
@@ -22,6 +23,8 @@ This use case illustrates VNF failure recovery time reduction with ONAP, thanks
The benefit for NFV edge service providers is to assess what degree of added VIM+NFVI platform resilience for VNFs is obtained by leveraging ONAP closed-loop control, vs. VIM+NFVI self-managed resilience (which may not be aware of the VNF or the corresponding end-to-end Service, but only of underlying resources such as VMs and servers).
+Also, a problem, or challenge, may not necessarily be a failure (which could also be recovered by other layers): it could be an issue leading to suboptimal performance, without failure. A VNF management layer as provided by ONAP may detect such non-failure problems, and provide a recovery solution which no other layer could provide in a given deployment.
+
Preconditions:
@@ -36,7 +39,7 @@ Different types of problems can be simulated, hence the identification of multip
.. image:: auto-UC02-testcases.jpg
-Description of simulated problems/challenges:
+Description of simulated problems/challenges, leading to various test cases:
* Physical Infra Failure
@@ -60,7 +63,6 @@ Description of simulated problems/challenges:
-
Test execution high-level description
=====================================
@@ -76,7 +78,7 @@ The second MSC illustrates the pattern of all test cases for the Resiliency Impr
* simulate the chosen problem (a.k.a. a "Challenge") for this test case, for example suspend a VM which may be used by a VNF
* start tracking the target VNF of this test case
* measure the ONAP-orchestrated VNF Recovery Time
-* then the test stops simulating the problem (for example: resume the VM that was suspended),
+* then the test stops simulating the problem (for example: resume the VM that was suspended)
In parallel, the MSC also shows the sequence of events happening in ONAP, thanks to its configuration to provide Service Assurance for the VNF.
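The per-test-case pattern above (simulate a challenge, track the VNF, measure recovery time, stop the challenge) can be sketched in Python. This is a minimal illustration only: `simulate_challenge`, `wait_for_vnf_recovery` and `stop_challenge` are hypothetical placeholders, not actual Auto code.

```python
from datetime import datetime
import time

def simulate_challenge():
    """Hypothetical stand-in for e.g. suspending a VM used by the VNF."""
    pass

def wait_for_vnf_recovery():
    """Hypothetical stand-in for monitoring the VNF until ONAP restores it."""
    time.sleep(0.1)  # pretend recovery is observed after a short delay

def stop_challenge():
    """Hypothetical stand-in for e.g. resuming the suspended VM."""
    pass

def run_resiliency_test():
    simulate_challenge()
    challenge_start = datetime.now()       # challenge is now active
    wait_for_vnf_recovery()                # returns once restoration is observed
    restoration_detected = datetime.now()
    stop_challenge()
    # ONAP-orchestrated VNF Recovery Time, in seconds
    return (restoration_detected - challenge_start).total_seconds()

print(run_resiliency_test())
```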
@@ -86,21 +88,21 @@ In parallel, the MSC also shows the sequence of events happening in ONAP, thanks
Test design: data model, implementation modules
===============================================
-The high-level design of classes identifies several entities:
+The high-level design of classes identifies several entities, described as follows:
-* Test Case: as identified above, each is a special case of the overall use case (e.g., categorized by challenge type)
-* Test Definition: gathers all the information necessary to run a certain test case
-* Metric Definition: describes a certain metric that may be measured, in addition to Recovery Time
-* Challenge Definition: describe the challenge (problem, failure, stress, ...) simulated by the test case
-* Recipient: entity that can receive commands and send responses, and that is queried by the Test Definition or Challenge Definition (a recipient would be typically a management service, with interfaces (CLI or API) for clients to query)
-* Resources: with 3 types (VNF, cloud virtual resource such as a VM, physical resource such as a server)
+* ``Test Case`` : as identified above, each is a special case of the overall use case (e.g., categorized by challenge type)
+* ``Test Definition`` : gathers all the information necessary to run a certain test case
+* ``Metric Definition`` : describes a certain metric that may be measured for a Test Case, in addition to Recovery Time
+* ``Challenge Definition`` : describes the challenge (problem, failure, stress, ...) simulated by the test case
+* ``Recipient`` : entity that can receive commands and send responses, and that is queried by the Test Definition or Challenge Definition (a recipient would typically be a management service, with interfaces (CLI or API) for clients to query)
+* ``Resources`` : with 3 types (VNF, cloud virtual resource such as a VM, physical resource such as a server)
Three of these entities have execution-time corresponding classes:
-* Test Execution, which captures all the relevant data of the execution of a Test Definition
-* Challenge Execution, which captures all the relevant data of the execution of a Challenge Definition
-* Metric Value, which captures the a quantitative measurement of a Metric Definition (with a timestamp)
+* ``Test Execution`` , which captures all the relevant data of the execution of a Test Definition
+* ``Challenge Execution`` , which captures all the relevant data of the execution of a Challenge Definition
+* ``Metric Value`` , which captures the quantitative measurement of a Metric Definition (with a timestamp)
.. image:: auto-UC02-data1.jpg
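As a rough sketch, the three execution-time entities could be modeled as simple data classes. The attribute names below follow the descriptions above and are illustrative only, not the actual Auto class definitions.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class TestExecution:
    """Captures relevant data from one execution of a Test Definition."""
    ID: int
    test_def_ID: int
    start_time: Optional[datetime] = None
    finish_time: Optional[datetime] = None

@dataclass
class ChallengeExecution:
    """Captures relevant data from one execution of a Challenge Definition."""
    ID: int
    challenge_def_ID: int
    start_time: Optional[datetime] = None
    stop_time: Optional[datetime] = None

@dataclass
class MetricValue:
    """A quantitative measurement of a Metric Definition, with a timestamp."""
    metric_def_ID: int
    value: float
    timestamp: datetime = field(default_factory=datetime.now)

mv = MetricValue(metric_def_ID=1, value=12.5)
print(mv.value)  # → 12.5
```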
@@ -122,13 +124,28 @@ The module design is straightforward: functions and classes for managing data, f
.. image:: auto-UC02-module1.jpg
-This last diagram shows the test user menu functions:
+This last diagram shows the test user menu functions, when used interactively:
.. image:: auto-UC02-module2.jpg
-In future releases of Auto, testing environments such as FuncTest and Yardstick might be leveraged.
+In future releases of Auto, testing environments such as Robot, FuncTest and Yardstick might be leveraged. Use Case code will then be invoked via API rather than by CLI interaction.
Also, anonymized test results could be collected from users willing to share them, and aggregates could be
maintained as benchmarks.
+As further illustration, the next figure shows cardinalities of class instances: one Test Definition per Test Case, multiple Test Executions per Test Definition, zero or one Recovery Time Metric Value per Test Execution (zero if the test failed for any reason, including if ONAP failed to recover from the challenge), etc.
+
+.. image:: auto-UC02-cardinalities.png
+
+
+In this particular implementation, both Test Definition and Challenge Definition classes have a generic execution method (e.g., ``run_test_code()`` for Test Definition) which can invoke a particular script, by way of an ID (which can be configured, and serves as a script selector for each Test Definition instance). The overall test execution logic between classes is shown in the next figure.
+
+.. image:: auto-UC02-logic.png
+
+The execution of a test case starts with invoking the generic method from Test Definition, which then creates Execution instances, invokes Challenge Definition methods, performs the Recovery time calculation, performs script-specific actions, and writes results to the CSV files.
+
+Finally, the following diagram shows a mapping between these class instances and the initial test case design. It corresponds to the test case which simulates a VM failure, and shows how the OpenStack SDK API is invoked (with a connection object) by the Challenge Definition methods, to suspend and resume a VM.
+
+.. image:: auto-UC02-TC-mapping.png
+
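The generic-method-plus-script-selector pattern described for Test Definition can be sketched as follows. This is a simplified, hypothetical illustration of the dispatch mechanism (method and attribute names are modeled on the descriptions above, not copied from the actual Auto classes).

```python
class TestDefinition:
    def __init__(self, ID, test_code_ID):
        self.ID = ID
        self.test_code_ID = test_code_ID  # configurable script selector
        # table of candidate test scripts; test_code_ID selects one
        self.test_code_list = [self.test_code001, self.test_code002]

    def run_test_code(self):
        """Generic execution method: dispatch to the selected script by ID."""
        index = self.test_code_ID - 1  # lists are indexed from 0 to N-1
        return self.test_code_list[index]()

    def test_code001(self):
        return "ran test_code001"

    def test_code002(self):
        return "ran test_code002"

td = TestDefinition(ID=5, test_code_ID=2)
print(td.run_test_code())  # → ran test_code002
```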
diff --git a/docs/release/userguide/UC03-feature.userguide.rst b/docs/release/userguide/UC03-feature.userguide.rst
index 5f28158..cf96981 100644
--- a/docs/release/userguide/UC03-feature.userguide.rst
+++ b/docs/release/userguide/UC03-feature.userguide.rst
@@ -15,16 +15,25 @@ specifically for Use Case 3: Enterprise vCPE.
Description
===========
-This Use Case shows how ONAP can help ensuring that virtual CPEs (including vFW: virtual firewalls) in Edge Cloud are enterprise-grade.
+This Use Case shows how ONAP can help ensure that virtual CPEs (including vFW: virtual firewalls) in Edge Cloud are enterprise-grade.
+Other vCPE examples: vAAA, vDHCP, vDNS, vGW, vBNG, vRouter, ...
-ONAP operations include a verification process for VNF onboarding (i.e. inclusion in the ONAP catalog), with multiple Roles (designer, tester, governor, operator), responsible for approving proposed VNFs (as VSPs (Vendor Software Products), and eventually as end-to-end Services).
+ONAP operations include a verification process for VNF onboarding (i.e., inclusion in the ONAP catalog), with multiple Roles (Designer, Tester, Governor, Operator), responsible for approving proposed VNFs (as VSPs (Vendor Software Products), and eventually as end-to-end Services).
-This process guarantees a minimum level of quality of onboarded VNFs. If all deployed vCPEs are only chosen from such an approved ONAP catalog, the resulting deployed end-to-end vCPE services will meet enterprise-grade requirements. ONAP provides a NBI in addition to a standard portal, thus enabling a programmatic deployment of VNFs, still conforming to ONAP processes.
+This process guarantees a minimum level of quality of onboarded VNFs. If all deployed vCPEs are only chosen from such an approved ONAP catalog, the resulting deployed end-to-end vCPE services will meet enterprise-grade requirements. ONAP provides an NBI (currently HTTP-based) in addition to a standard GUI portal, thus enabling a programmatic deployment of VNFs, still conforming to ONAP processes.
-Moreover, ONAP also comprises real-time monitoring (by the DCAE component), which monitors performance for SLAs, can adjust allocated resources accordingly (elastic adjustment at VNF level), and can ensure High Availability.
+Moreover, ONAP also comprises real-time monitoring (by the DCAE component), which can perform the following functions:
+
+* monitor VNF performance for SLAs
+* adjust allocated resources accordingly (elastic adjustment at VNF level: scaling out and in, possibly also scaling up and down)
+* ensure High Availability (restoration of failed or underperforming services)
DCAE executes directives coming from policies described in the Policy Framework, and closed-loop controls described in the CLAMP component.
+ONAP can perform the provisioning side of a BSS Order Management application handling vCPE orders.
+
+Additional processing can be added to ONAP (internally as configured policies and closed-loop controls, or externally as separate systems): Path Computation Element and Load Balancing, and even telemetry-based Network Artificial Intelligence.
+
Finally, this automated approach also reduces costs, since repetitive actions are designed once and executed multiple times, as vCPEs are instantiated and decommissioned (frequent events, given the variability of business activity, and a Small Business market similar to the Residential market: many contract updates resulting in many vCPE changes).
NFV edge service providers need to provide site2site, site2dc (Data Center) and site2internet services to tenants both efficiently and safely, by deploying such qualified enterprise-grade vCPE.
@@ -42,34 +51,50 @@ Main Success Scenarios:
* VNF spin-up
- * vCPE spin-up: MSO calls the VNFM to spin up a vCPE instance from the catalog and then updates the active VNF list
* vFW spin-up: MSO calls the VNFM to spin up a vFW instance from the catalog and then updates the active VNF list
+ * other vCPEs spin-up: MSO calls the VNFM to spin up a vCPE instance from the catalog and then updates the active VNF list
* site2site
* L3VPN service subscribing: MSO calls the SDNC to create VXLAN tunnels to carry L2 traffic between client's ThinCPE and SP's vCPE, and enables vCPE to route between different sites.
* L3VPN service unsubscribing: MSO calls the SDNC to destroy tunnels and routes, thus disable traffic between different sites.
+* site2dc (site to Data Center) by VPN
+* site2internet
+* scaling control (start with scaling out/in)
See `ONAP description of vCPE use case <https://wiki.onap.org/display/DW/Use+Case+proposal%3A+Enterprise+vCPE>`_ for more details, including MSCs.
Details on the test cases corresponding to this use case:
-* VNF Management
+* vCPE VNF deployment
+
+ * Spin up a vFW instance by calling the NBI of the orchestrator.
+ * Following the vFW example and pattern, spin up other vCPE instances.
+
+* vCPE VNF networking
+
+ * Subscribe/Unsubscribe to a VPN service: configure tenant/subscriber for vCPE, configure VPN service
+ * Subscribe/Unsubscribe to an Internet Access service: configure tenant/subscriber for vCPE, configure Internet Access service
+
+* vCPE VNF Scaling
+
+ * ONAP-based VNF Scale-out and Scale-in (using measurements arriving in DCAE, policies/CLAMP or external system performing LB function)
+ * later, possibly also scale-up and scale-down
+
+
+
+The following diagram shows these test cases:
+
+.. image:: auto-UC03-TestCases.png
- * Spin up a vCPE instance: Spin up a vCPE instance, by calling NBI of the orchestrator.
- * Spin up a vFW instance: Spin up a vFW instance, by calling NBI of the orchestrator.
-* VPN as a Service
+Illustration of test cases mapped to architecture, with possible external systems (BSS for Order Management, PCE+LB, Network AI):
- * Subscribe to a VPN service: Subscribe to a VPN service, by calling NBI of the orchestrator.
- * Unsubscribe to a VPN service: Unsubscribe to a VPN service, by calling NBI of the orchestrator.
+.. image:: auto-UC03-TC-archit.png
-* Internet as a Service
- * Subscribe to an Internet service: Subscribe to an Internet service, by calling NBI of the orchestrator.
- * Unsubscribe to an Internet service: Unsubscribe to an Internet service, by calling NBI of the orchestrator.
Test execution high-level description
diff --git a/docs/release/userguide/auto-UC02-TC-mapping.png b/docs/release/userguide/auto-UC02-TC-mapping.png
new file mode 100644
index 0000000..c2dd0db
--- /dev/null
+++ b/docs/release/userguide/auto-UC02-TC-mapping.png
Binary files differ
diff --git a/docs/release/userguide/auto-UC02-cardinalities.png b/docs/release/userguide/auto-UC02-cardinalities.png
new file mode 100644
index 0000000..10dd3b0
--- /dev/null
+++ b/docs/release/userguide/auto-UC02-cardinalities.png
Binary files differ
diff --git a/docs/release/userguide/auto-UC02-data1.jpg b/docs/release/userguide/auto-UC02-data1.jpg
index 02a60ba..62526c5 100644
--- a/docs/release/userguide/auto-UC02-data1.jpg
+++ b/docs/release/userguide/auto-UC02-data1.jpg
Binary files differ
diff --git a/docs/release/userguide/auto-UC02-data2.jpg b/docs/release/userguide/auto-UC02-data2.jpg
index 7096c96..df73a94 100644
--- a/docs/release/userguide/auto-UC02-data2.jpg
+++ b/docs/release/userguide/auto-UC02-data2.jpg
Binary files differ
diff --git a/docs/release/userguide/auto-UC02-data3.jpg b/docs/release/userguide/auto-UC02-data3.jpg
index 8e8921d..3f84a20 100644
--- a/docs/release/userguide/auto-UC02-data3.jpg
+++ b/docs/release/userguide/auto-UC02-data3.jpg
Binary files differ
diff --git a/docs/release/userguide/auto-UC02-logic.png b/docs/release/userguide/auto-UC02-logic.png
new file mode 100644
index 0000000..90b41dd
--- /dev/null
+++ b/docs/release/userguide/auto-UC02-logic.png
Binary files differ
diff --git a/docs/release/userguide/auto-UC03-TC-archit.png b/docs/release/userguide/auto-UC03-TC-archit.png
new file mode 100644
index 0000000..95d641b
--- /dev/null
+++ b/docs/release/userguide/auto-UC03-TC-archit.png
Binary files differ
diff --git a/docs/release/userguide/auto-UC03-TestCases.png b/docs/release/userguide/auto-UC03-TestCases.png
new file mode 100644
index 0000000..bb84a57
--- /dev/null
+++ b/docs/release/userguide/auto-UC03-TestCases.png
Binary files differ
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
index 7cfbe94..dd308dc 100644
--- a/docs/release/userguide/index.rst
+++ b/docs/release/userguide/index.rst
@@ -16,7 +16,7 @@ OPNFV Auto (ONAP-Automated OPNFV) User Guide
.. toctree::
:numbered:
- :maxdepth: 2
+ :maxdepth: 3
UC01-feature.userguide.rst
UC02-feature.userguide.rst
diff --git a/lib/auto/testcase/resiliency/AutoResilItfCloud.py b/lib/auto/testcase/resiliency/AutoResilItfCloud.py
index 69c5327..302a662 100644
--- a/lib/auto/testcase/resiliency/AutoResilItfCloud.py
+++ b/lib/auto/testcase/resiliency/AutoResilItfCloud.py
@@ -33,14 +33,15 @@
######################################################################
# import statements
import AutoResilGlobal
+import time
# for method 1 and 2
-#import openstack
+import openstack
#for method 3
-from openstack import connection
+#from openstack import connection
-def os_list_servers(conn):
+def openstack_list_servers(conn):
"""List OpenStack servers."""
# see https://docs.openstack.org/python-openstacksdk/latest/user/proxies/compute.html
if conn != None:
@@ -49,14 +50,20 @@ def os_list_servers(conn):
try:
i=1
for server in conn.compute.servers():
- print('Server',str(i),'\n',server,'n')
+ print('Server',str(i))
+ print(' Name:',server.name)
+ print(' ID:',server.id)
+ print(' key:',server.key_name)
+ print(' status:',server.status)
+ print(' AZ:',server.availability_zone)
+ print('Details:\n',server)
i+=1
except Exception as e:
print("Exception:",type(e), e)
print("No Servers\n")
-def os_list_networks(conn):
+def openstack_list_networks(conn):
"""List OpenStack networks."""
# see https://docs.openstack.org/python-openstacksdk/latest/user/proxies/network.html
if conn != None:
@@ -65,14 +72,14 @@ def os_list_networks(conn):
try:
i=1
for network in conn.network.networks():
- print('Network',str(i),'\n',network,'n')
+ print('Network',str(i),'\n',network,'\n')
i+=1
except Exception as e:
print("Exception:",type(e), e)
print("No Networks\n")
-def os_list_volumes(conn):
+def openstack_list_volumes(conn):
"""List OpenStack volumes."""
# see https://docs.openstack.org/python-openstacksdk/latest/user/proxies/block_storage.html
# note: The block_storage member will only be added if the service is detected.
@@ -82,14 +89,20 @@ def os_list_volumes(conn):
try:
i=1
for volume in conn.block_storage.volumes():
- print('Volume',str(i),'\n',volume,'n')
+ print('Volume',str(i))
+ print(' Name:',volume.name)
+ print(' ID:',volume.id)
+ print(' size:',volume.size)
+ print(' status:',volume.status)
+ print(' AZ:',volume.availability_zone)
+ print('Details:\n',volume)
i+=1
except Exception as e:
print("Exception:",type(e), e)
print("No Volumes\n")
-
-def os_list_users(conn):
+
+def openstack_list_users(conn):
"""List OpenStack users."""
# see https://docs.openstack.org/python-openstacksdk/latest/user/guides/identity.html
if conn != None:
@@ -98,13 +111,13 @@ def os_list_users(conn):
try:
i=1
for user in conn.identity.users():
- print('User',str(i),'\n',user,'n')
+ print('User',str(i),'\n',user,'\n')
i+=1
except Exception as e:
print("Exception:",type(e), e)
print("No Users\n")
-
-def os_list_projects(conn):
+
+def openstack_list_projects(conn):
"""List OpenStack projects."""
# see https://docs.openstack.org/python-openstacksdk/latest/user/guides/identity.html
if conn != None:
@@ -113,14 +126,14 @@ def os_list_projects(conn):
try:
i=1
for project in conn.identity.projects():
- print('Project',str(i),'\n',project,'n')
+ print('Project',str(i),'\n',project,'\n')
i+=1
except Exception as e:
print("Exception:",type(e), e)
print("No Projects\n")
-
-def os_list_domains(conn):
+
+def openstack_list_domains(conn):
"""List OpenStack domains."""
# see https://docs.openstack.org/python-openstacksdk/latest/user/guides/identity.html
if conn != None:
@@ -129,7 +142,7 @@ def os_list_domains(conn):
try:
i=1
for domain in conn.identity.domains():
- print('Domain',str(i),'\n',domain,'n')
+ print('Domain',str(i),'\n',domain,'\n')
i+=1
except Exception as e:
print("Exception:",type(e), e)
@@ -138,14 +151,17 @@ def os_list_domains(conn):
-
-
+
+
def gdtest_openstack():
- # Method 1: assume there is a clouds.yaml file in PATH, starting path search with local directory
+
+ # Method 1 (preferred) : assume there is a clouds.yaml file in PATH, starting path search with local directory
#conn = openstack.connect(cloud='armopenstack', region_name='RegionOne')
- #conn = openstack.connect(cloud='hpe16openstack', region_name='RegionOne')
- # getting error: AttributeError: module 'openstack' has no attribute 'connect'
+ #conn = openstack.connect(cloud='hpe16openstackEuphrates', region_name='RegionOne')
+ conn = openstack.connect(cloud='hpe16openstackFraser', region_name='RegionOne')
+ # if getting error: AttributeError: module 'openstack' has no attribute 'connect', check that openstack is installed for this python version
+
# Method 2: pass arguments directly, all as strings
# see details at https://docs.openstack.org/python-openstacksdk/latest/user/connection.html
@@ -163,19 +179,20 @@ def gdtest_openstack():
# password='opnfv_secret',
# region_name='RegionOne',
# )
- # getting error: AttributeError: module 'openstack' has no attribute 'connect'
+ # if getting error: AttributeError: module 'openstack' has no attribute 'connect', check that openstack is installed for this python version
+
# Method 3: create Connection object directly
- auth_args = {
- #'auth_url': 'https://10.10.50.103:5000/v2.0', # Arm
- #'auth_url': 'http://10.16.0.101:5000/v2.0', # hpe16, Euphrates
- 'auth_url': 'http://10.16.0.107:5000/v3', # hpe16, Fraser
- 'project_name': 'admin',
- 'username': 'admin',
- 'password': 'opnfv_secret',
- 'region_name': 'RegionOne',
- 'domain': 'Default'}
- conn = connection.Connection(**auth_args)
+ # auth_args = {
+ # #'auth_url': 'https://10.10.50.103:5000/v2.0', # Arm
+ # #'auth_url': 'http://10.16.0.101:5000/v2.0', # hpe16, Euphrates
+ # 'auth_url': 'http://10.16.0.107:5000/v3', # hpe16, Fraser
+ # 'project_name': 'admin',
+ # 'username': 'admin',
+ # 'password': 'opnfv_secret',
+ # 'region_name': 'RegionOne',
+ # 'domain': 'Default'}
+ # conn = connection.Connection(**auth_args)
#conn = connection.Connection(
#auth_url='http://10.16.0.107:5000/v3',
@@ -184,12 +201,65 @@ def gdtest_openstack():
#password='opnfv_secret')
- os_list_servers(conn)
- os_list_networks(conn)
- os_list_volumes(conn)
- os_list_users(conn)
- os_list_projects(conn)
- os_list_domains(conn)
+ openstack_list_servers(conn)
+ openstack_list_networks(conn)
+ openstack_list_volumes(conn)
+ openstack_list_users(conn)
+ openstack_list_projects(conn)
+ openstack_list_domains(conn)
+
+ # VM: hpe16-Auto-UC2-gdtest-compute1
+ gds_ID = '715c677a-7914-4ca8-8c6d-75bf29eeb940'
+ gds = conn.compute.get_server(gds_ID)
+ print('\ngds.name=',gds.name)
+ print('gds.status=',gds.status)
+ print('suspending...')
+ conn.compute.suspend_server(gds_ID) # NOT synchronous: returns before suspension action is completed
+ wait_seconds = 10
+ print(' waiting',wait_seconds,'seconds...')
+ time.sleep(wait_seconds)
+ gds = conn.compute.get_server(gds_ID) # need to refresh data; not maintained live
+ print('gds.status=',gds.status)
+ print('resuming...')
+ conn.compute.resume_server(gds_ID)
+ print(' waiting',wait_seconds,'seconds...')
+ time.sleep(wait_seconds)
+ gds = conn.compute.get_server(gds_ID) # need to refresh data; not maintained live
+ print('gds.status=',gds.status)
+
+
+
+ #VM: test3
+ gds_ID = 'd3ceffc3-5967-4f18-b8b5-b1b2bd7ab76d'
+ gds = conn.compute.get_server(gds_ID)
+ print('\ngds.name=',gds.name)
+ print('gds.status=',gds.status)
+ print('suspending...')
+ conn.compute.suspend_server(gds_ID) # NOT synchronous: returns before suspension action is completed
+ wait_seconds = 10
+ print(' waiting',wait_seconds,'seconds...')
+ time.sleep(wait_seconds)
+ gds = conn.compute.get_server(gds_ID) # need to refresh data; not maintained live
+ print('gds.status=',gds.status)
+ print('resuming...')
+ conn.compute.resume_server(gds_ID)
+ print(' waiting',wait_seconds,'seconds...')
+ time.sleep(wait_seconds)
+ gds = conn.compute.get_server(gds_ID) # need to refresh data; not maintained live
+ print('gds.status=',gds.status)
+
+ #Volume: hpe16-Auto-UC2-gdtest-volume1
+ gdv_ID = '5a6c1dbd-5097-4a9b-8f79-6f03cde18bf6'
+ gdv = conn.block_storage.get_volume(gdv_ID)
+ # no API for stopping/restarting a volume... only delete. ONAP would have to completely migrate a VNF depending on this volume
+ print('\ngdv.name=',gdv.name)
+ print('gdv.status=',gdv.status)
+ #gdv_recreate = gdv
+ #print('deleting...')
+ #conn.block_storage.delete_volume(gdv_ID)
+ #conn.block_storage.delete_volume(gdv)
+ #print('recreating...')
+ #gdv = conn.block_storage.create_volume(<attributes saved in gdv_recreate>)
# get_server(server): Get a single Server
@@ -211,7 +281,7 @@ def main():
gdtest_openstack()
- print("Ciao\n")
+ print("\nCiao\n")
if __name__ == "__main__":
main()
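As an aside on the suspend/resume calls added above: since they return before the action completes, the fixed `time.sleep()` waits could be replaced by polling the server status until the expected state is reached. A sketch of such a helper follows; `FakeServer` is an illustrative stand-in for fetching `conn.compute.get_server(...).status`, not part of the OpenStack SDK.

```python
import time

def wait_for_status(fetch_status, target, timeout=60.0, interval=0.05):
    """Poll fetch_status() until it returns target, or give up after timeout.

    Returns True if the target status was observed, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_status() == target:
            return True
        time.sleep(interval)
    return False

# illustration with a fake server whose status flips after a few polls
class FakeServer:
    def __init__(self):
        self.polls = 0
    def status(self):
        self.polls += 1
        return 'SUSPENDED' if self.polls >= 3 else 'ACTIVE'

srv = FakeServer()
print(wait_for_status(srv.status, 'SUSPENDED', timeout=5.0))  # → True
```

With a real connection, `fetch_status` would be something like `lambda: conn.compute.get_server(gds_ID).status`, refreshing the server data on each poll since it is not maintained live.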
diff --git a/lib/auto/testcase/resiliency/AutoResilMain.py b/lib/auto/testcase/resiliency/AutoResilMain.py
index 2f67bdf..1d21f6a 100644
--- a/lib/auto/testcase/resiliency/AutoResilMain.py
+++ b/lib/auto/testcase/resiliency/AutoResilMain.py
@@ -164,7 +164,6 @@ def main():
print("Problem with test definition: empty")
sys.exit() # stop entire program, because test definition MUST be correct
else:
- # TODO run test: call selected test definition run_test_code() method
test_def = get_indexed_item_from_list(selected_test_def_ID, AutoResilGlobal.test_definition_list)
if test_def != None:
test_def.run_test_code()
diff --git a/lib/auto/testcase/resiliency/AutoResilMgTestDef.py b/lib/auto/testcase/resiliency/AutoResilMgTestDef.py
index 9667f93..7e0b50d 100644
--- a/lib/auto/testcase/resiliency/AutoResilMgTestDef.py
+++ b/lib/auto/testcase/resiliency/AutoResilMgTestDef.py
@@ -320,10 +320,62 @@ class TestDefinition(AutoBaseObject):
def run_test_code(self):
- """Run currently selected test code."""
+ """Run currently selected test code. Common code runs here, specific code is invoked through test_code_list and test_code_ID."""
try:
+ # here, trigger start code from challenge def (to simulate VM failure), manage Recovery time measurement,
+ # specific monitoring of VNF, trigger stop code from challenge def
+
+ time1 = datetime.now() # get time as soon as execution starts
+
+ # create challenge execution instance
+ chall_exec_ID = 1 # ideally, would be incremented, but need to maintain a number of challenge executions somewhere. or could be random.
+ chall_exec_name = 'challenge execution' # challenge def ID is already passed
+ chall_exec_challDefID = self.challenge_def_ID
+ chall_exec = ChallengeExecution(chall_exec_ID, chall_exec_name, chall_exec_challDefID)
+ chall_exec.log.append_to_list('challenge execution created')
+
+ # create test execution instance
+ test_exec_ID = 1 # ideally, would be incremented, but need to maintain a number of test executions somewhere. or could be random.
+ test_exec_name = 'test execution' # test def ID is already passed
+ test_exec_testDefID = self.ID
+ test_exec_userID = '' # or get user name from getpass module: import getpass and test_exec_userID = getpass.getuser()
+ test_exec = TestExecution(test_exec_ID, test_exec_name, test_exec_testDefID, chall_exec_ID, test_exec_userID)
+ test_exec.log.append_to_list('test execution created')
+
+ # get time1 before anything else, so the setup time is counted
+ test_exec.start_time = time1
+
+ # get challenge definition instance, and start challenge
+ challenge_def = get_indexed_item_from_list(self.challenge_def_ID, AutoResilGlobal.challenge_definition_list)
+ challenge_def.run_start_challenge_code()
+
+ # memorize challenge start time
+ chall_exec.start_time = datetime.now()
+ test_exec.challenge_start_time = chall_exec.start_time
+
+ # call specific test definition code, via table of functions; this code should monitor a VNF and return when restoration is observed
test_code_index = self.test_code_ID - 1 # lists are indexed from 0 to N-1
- self.test_code_list[test_code_index]() # invoke corresponding method, via index
+ self.test_code_list[test_code_index]() # invoke corresponding method, via index; could check for return code
+
+ # memorize restoration detection time and compute recovery time
+ test_exec.restoration_detection_time = datetime.now()
+ recovery_time_metric_def = get_indexed_item_from_file(1,FILE_METRIC_DEFINITIONS) # get Recovery Time metric definition: ID=1
+ test_exec.recovery_time = recovery_time_metric_def.compute(test_exec.challenge_start_time, test_exec.restoration_detection_time)
+
+ # stop challenge
+ challenge_def.run_stop_challenge_code()
+
+ # memorize challenge stop time
+ chall_exec.stop_time = datetime.now()
+ chall_exec.log.append_to_list('challenge execution finished')
+
+ # write results to CSV files, memorize test finish time
+ chall_exec.write_to_csv()
+ test_exec.finish_time = datetime.now()
+ test_exec.log.append_to_list('test execution finished')
+ test_exec.write_to_csv()
+
+
except Exception as e:
print(type(e), e)
sys.exit()
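The Recovery Time computation invoked above (`recovery_time_metric_def.compute(...)`) can be illustrated with a small sketch. The assumption that `compute()` simply returns the elapsed seconds between the two timestamps is mine; the actual metric definition class may differ.

```python
from datetime import datetime, timedelta

class RecoveryTimeDef:
    """Sketch of a Metric Definition whose compute() yields recovery time as
    elapsed seconds between challenge start and restoration detection."""
    def compute(self, challenge_start_time, restoration_detection_time):
        delta = restoration_detection_time - challenge_start_time
        return delta.total_seconds()

t0 = datetime(2018, 5, 1, 12, 0, 0)   # challenge start
t1 = t0 + timedelta(seconds=42)       # restoration detected
print(RecoveryTimeDef().compute(t0, t1))  # → 42.0
```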
@@ -350,13 +402,10 @@ class TestDefinition(AutoBaseObject):
"""Test case code number 005."""
print("This is test_code005 from TestDefinition #", self.ID, ", test case #", self.test_case_ID, sep='')
- # here, trigger start code from challenge def (to simulate VM failure), manage Recovery time measurement,
- # monitoring of VNF, trigger stop code from challenge def, perform restoration of VNF
- challenge_def = get_indexed_item_from_list(self.challenge_def_ID, AutoResilGlobal.challenge_definition_list)
- if challenge_def != None:
- challenge_def.run_start_challenge_code()
- challenge_def.run_stop_challenge_code()
-
+ # specific VNF recovery monitoring, with specific metrics if any
+ # interact with ONAP: query VNF status periodically; may also check VM or container status directly with the VIM
+ # return when the VNF is recovered
+ # should also provision for failure to recover (maximum wait time; return code: recovery-OK boolean)
def test_code006(self):
"""Test case code number 006."""
@@ -437,9 +486,9 @@ def init_test_definitions():
test_definitions = []
# add info to list in memory, one by one, following signature values
- test_def_ID = 1
+ test_def_ID = 5
test_def_name = "VM failure impact on virtual firewall (vFW VNF)"
- test_def_challengeDefID = 1
+ test_def_challengeDefID = 5
test_def_testCaseID = 5
test_def_VNFIDs = [1]
test_def_associatedMetricsIDs = [2]
@@ -466,14 +515,20 @@ def init_test_definitions():
######################################################################
class ChallengeType(Enum):
- # server-level failures
+ # physical server-level failures 1XX
COMPUTE_HOST_FAILURE = 100
DISK_FAILURE = 101
LINK_FAILURE = 102
NIC_FAILURE = 103
- # network-level failures
- OVS_BRIDGE_FAILURE = 200
- # security stresses
+
+ # cloud-level failures 2XX
+ CLOUD_COMPUTE_FAILURE = 200
+ SDN_C_FAILURE = 201
+ OVS_BRIDGE_FAILURE = 202
+ CLOUD_STORAGE_FAILURE = 203
+ CLOUD_NETWORK_FAILURE = 204
+
+ # security stresses 3XX
HOST_TAMPERING = 300
HOST_INTRUSION = 301
NETWORK_INTRUSION = 302
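The 1XX/2XX/3XX value ranges give each challenge type a family that can be recovered from the hundreds digit of its value; a small sketch of that convention (the `category` helper is illustrative, not part of the module, and only a few members are reproduced here):

```python
from enum import Enum

class ChallengeType(Enum):
    # physical server-level failures 1XX
    COMPUTE_HOST_FAILURE = 100
    # cloud-level failures 2XX
    CLOUD_COMPUTE_FAILURE = 200
    # security stresses 3XX
    HOST_TAMPERING = 300

def category(challenge_type):
    """Map a ChallengeType to its family using the hundreds digit of its value."""
    return {1: "physical server-level failure",
            2: "cloud-level failure",
            3: "security stress"}[challenge_type.value // 100]
```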
@@ -619,9 +674,26 @@ class ChallengeDefinition(AutoBaseObject):
def start_challenge_code005(self):
"""Start Challenge code number 005."""
print("This is start_challenge_code005 from ChallengeDefinition #",self.ID, sep='')
+ # challenge #5, related to test case #5, i.e. test def #5
+ # cloud reference (name and region) should be in clouds.yaml file
+ # conn = openstack.connect(cloud='cloudNameForChallenge005', region_name='regionNameForChallenge005')
+ # TestDef knows VNF, gets VNF->VM mapping from ONAP, passes VM ref to ChallengeDef
+ # ChallengeDef suspends/resumes VM
+ # conn.compute.servers() to get list of servers, using VM ID, check server.id and/or server.name
+ # conn.compute.suspend_server(this server id)
+
+
def stop_challenge_code005(self):
"""Stop Challenge code number 005."""
print("This is stop_challenge_code005 from ChallengeDefinition #",self.ID, sep='')
+ # challenge #5, related to test case #5, i.e. test def #5
+ # cloud reference (name and region) should be in clouds.yaml file
+ # conn = openstack.connect(cloud='cloudNameForChallenge005', region_name='regionNameForChallenge005')
+ # TestDef knows VNF, gets VNF->VM mapping from ONAP, passes VM ref to ChallengeDef
+ # ChallengeDef suspends/resumes VM
+ # conn.compute.servers() to get list of servers, using VM ID, check server.id and/or server.name
+ # conn.compute.resume_server(this server id)
+
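The suspend/resume steps sketched in the comments above could look as follows with the OpenStack SDK. The cloud name, the VM name, and the `find_server` helper are assumptions for illustration; the `openstack.connect` calls are left commented out since they need a live cloud described in clouds.yaml:

```python
def find_server(servers, vm_ref):
    """Return the first server whose id or name matches vm_ref, else None."""
    for server in servers:
        if vm_ref in (server.id, server.name):
            return server
    return None

# With a live cloud (entry taken from clouds.yaml), the challenge would be:
# import openstack
# conn = openstack.connect(cloud='hpe16openstackFraser', region_name='RegionOne')
# server = find_server(conn.compute.servers(), 'vFW-VM')  # hypothetical VM name
# conn.compute.suspend_server(server.id)   # start challenge
# conn.compute.resume_server(server.id)    # stop challenge
```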
def start_challenge_code006(self):
"""Start Challenge code number 006."""
@@ -711,9 +783,9 @@ def init_challenge_definitions():
challenge_defs = []
# add info to list in memory, one by one, following signature values
- chall_def_ID = 1
+ chall_def_ID = 5
chall_def_name = "VM failure"
- chall_def_challengeType = ChallengeType.COMPUTE_HOST_FAILURE
+ chall_def_challengeType = ChallengeType.CLOUD_COMPUTE_FAILURE
chall_def_recipientID = 1
chall_def_impactedCloudResourcesInfo = "OpenStack VM on ctl02 in Arm pod"
chall_def_impactedCloudResourceIDs = [2]
@@ -722,8 +794,10 @@ def init_challenge_definitions():
chall_def_startChallengeCLICommandSent = "service nova-compute stop"
chall_def_stopChallengeCLICommandSent = "service nova-compute restart"
# OpenStack VM Suspend vs. Pause: suspend stores the state of VM on disk while pause stores it in memory (RAM)
+ # in CLI:
# $ nova suspend NAME
# $ nova resume NAME
+ # but it is preferable to use the OpenStack SDK
chall_def_startChallengeAPICommandSent = []
chall_def_stopChallengeAPICommandSent = []
@@ -1575,7 +1649,7 @@ def main():
challgs = init_challenge_definitions()
print(challgs)
- chall = get_indexed_item_from_file(1,FILE_CHALLENGE_DEFINITIONS)
+ chall = get_indexed_item_from_file(5,FILE_CHALLENGE_DEFINITIONS)
print(chall)
chall.run_start_challenge_code()
chall.run_stop_challenge_code()
@@ -1584,7 +1658,7 @@ def main():
tds = init_test_definitions()
print(tds)
- td = get_indexed_item_from_file(1,FILE_TEST_DEFINITIONS)
+ td = get_indexed_item_from_file(5,FILE_TEST_DEFINITIONS)
print(td)
#td.printout_all(0)
#td.run_test_code()
@@ -1604,8 +1678,8 @@ def main():
metricdef = get_indexed_item_from_file(1,FILE_METRIC_DEFINITIONS)
print(metricdef)
- t1 = datetime(2018,4,1,15,10,12,500000)
- t2 = datetime(2018,4,1,15,13,43,200000)
+ t1 = datetime(2018,7,1,15,10,12,500000)
+ t2 = datetime(2018,7,1,15,13,43,200000)
r1 = metricdef.compute(t1,t2)
print(r1)
print()
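The `metricdef.compute(t1, t2)` call above measures recovery time from two timestamps; the arithmetic reduces to a timedelta in seconds, sketched here (the project's actual metric class may wrap the result in a richer object):

```python
from datetime import datetime

def compute_recovery_time(challenge_start, restoration_detected):
    """Recovery time in seconds between challenge start and observed restoration."""
    return (restoration_detected - challenge_start).total_seconds()

t1 = datetime(2018, 7, 1, 15, 10, 12, 500000)
t2 = datetime(2018, 7, 1, 15, 13, 43, 200000)
# 3 minutes 30.7 seconds elapsed between t1 and t2
```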
@@ -1646,7 +1720,7 @@ def main():
print()
- ce1 = ChallengeExecution(1,"essai challenge execution",1)
+ ce1 = ChallengeExecution(1,"essai challenge execution",5)
ce1.start_time = datetime.now()
ce1.log.append_to_list("challenge execution log event 1")
ce1.log.append_to_list("challenge execution log event 2")
@@ -1668,7 +1742,7 @@ def main():
print()
- te1 = TestExecution(1,"essai test execution",1,1,"Gerard")
+ te1 = TestExecution(1,"essai test execution",5,1,"Gerard")
te1.start_time = datetime.now()
te1.challenge_start_time = ce1.start_time # illustrate how to set test execution challenge start time
print("te1.challenge_start_time:",te1.challenge_start_time)
diff --git a/lib/auto/testcase/resiliency/clouds.yaml b/lib/auto/testcase/resiliency/clouds.yaml
index 593a07c..e6ec824 100644
--- a/lib/auto/testcase/resiliency/clouds.yaml
+++ b/lib/auto/testcase/resiliency/clouds.yaml
@@ -14,9 +14,9 @@ clouds:
armopenstack:
auth:
auth_url: https://10.10.50.103:5000/v2.0
+ project_name: admin
username: admin
password: opnfv_secret
- project_name: admin
region_name: RegionOne
# Openstack instance on LaaS hpe16, from OPNFV Euphrates, controller IP@ (mgt: 172.16.10.101; public: 10.16.0.101)
@@ -27,9 +27,9 @@ clouds:
hpe16openstackEuphrates:
auth:
auth_url: http://10.16.0.101:5000/v2.0
+ project_name: admin
username: admin
password: opnfv_secret
- project_name: admin
region_name: RegionOne
# Openstack instance on LaaS hpe16, from OPNFV Fraser, controller IP@ (mgt: 172.16.10.36; public: 10.16.0.107)
@@ -37,12 +37,16 @@ clouds:
# admin: http://172.16.10.36:35357/v3
# internal: http://172.16.10.36:5000/v3
# public: http://10.16.0.107:5000/v3
+ # Horizon: https://10.16.0.107:8078, but SSH port forwarding through 10.10.100.26 is needed to reach it from outside
+ # "If you are using Identity v3 you need to specify the user and the project domain name"
hpe16openstackFraser:
auth:
auth_url: http://10.16.0.107:5000/v3
+ project_name: admin
username: admin
password: opnfv_secret
- project_name: admin
+ user_domain_name: Default
+ project_domain_name: Default
region_name: RegionOne
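As the comment above notes, Identity v3 entries must carry user and project domain names. A small check of that rule over a parsed clouds.yaml entry; plain dicts are used here so the sketch needs no live file, and key names follow the clouds.yaml conventions shown above:

```python
def v3_auth_is_complete(cloud_entry):
    """For a v3 auth_url, the auth section must name both domains."""
    auth = cloud_entry.get("auth", {})
    if not auth.get("auth_url", "").rstrip("/").endswith("/v3"):
        return True  # v2.0 entries do not need domain names
    return all(k in auth for k in ("user_domain_name", "project_domain_name"))

# dict form of the hpe16openstackFraser entry above
fraser = {"auth": {"auth_url": "http://10.16.0.107:5000/v3",
                   "project_name": "admin", "username": "admin",
                   "password": "opnfv_secret",
                   "user_domain_name": "Default",
                   "project_domain_name": "Default"},
          "region_name": "RegionOne"}
```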
# ubuntu@ctl01:~$ openstack project show admin
@@ -78,14 +82,28 @@ clouds:
# | name | heat_user_domain |
# +-------------+---------------------------------------------+
-export OS_AUTH_URL=http://10.16.0.107:5000/v3
-export OS_PROJECT_ID=04fcfe7aa83f4df79ae39ca748aa8637
-export OS_PROJECT_NAME="admin"
-export OS_USER_DOMAIN_NAME="Default"
-export OS_USERNAME="admin"
-export OS_PASSWORD="opnfv_secret"
-export OS_REGION_NAME="RegionOne"
-export OS_INTERFACE=public
-export OS_IDENTITY_API_VERSION=3
+# admin user (from Horizon on hpe16):
+# Domain ID default
+# Domain Name Default
+# User Name admin
+# Description None
+# ID df0ea50cfcff4bbfbfdfefccdb018834
+# Email root@localhost
+# Enabled Yes
+# Primary Project ID 04fcfe7aa83f4df79ae39ca748aa8637
+# Primary Project Name admin
+
+
+
+
+# export OS_AUTH_URL=http://10.16.0.107:5000/v3
+# export OS_PROJECT_ID=04fcfe7aa83f4df79ae39ca748aa8637
+# export OS_PROJECT_NAME="admin"
+# export OS_USER_DOMAIN_NAME="Default"
+# export OS_USERNAME="admin"
+# export OS_PASSWORD="opnfv_secret"
+# export OS_REGION_NAME="RegionOne"
+# export OS_INTERFACE=public
+# export OS_IDENTITY_API_VERSION=3