Diffstat (limited to 'docs/release/userguide/UC02-feature.userguide.rst')
 -rw-r--r-- docs/release/userguide/UC02-feature.userguide.rst | 49 ++++++++++-----
 1 file changed, 33 insertions(+), 16 deletions(-)
diff --git a/docs/release/userguide/UC02-feature.userguide.rst b/docs/release/userguide/UC02-feature.userguide.rst
index 0ecb7de..3ed5781 100644
--- a/docs/release/userguide/UC02-feature.userguide.rst
+++ b/docs/release/userguide/UC02-feature.userguide.rst
@@ -8,7 +8,8 @@
Auto User Guide: Use Case 2 Resiliency Improvements Through ONAP
================================================================
-This document provides the user guide for Fraser release of Auto, specifically for Use Case 2: Resiliency Improvements Through ONAP.
+This document provides the user guide for the Fraser release of Auto,
+specifically for Use Case 2: Resiliency Improvements Through ONAP.
Description
@@ -22,6 +23,8 @@ This use case illustrates VNF failure recovery time reduction with ONAP, thanks
The benefit for NFV edge service providers is to assess what degree of added VIM+NFVI platform resilience for VNFs is obtained by leveraging ONAP closed-loop control, vs. VIM+NFVI self-managed resilience (which may not be aware of the VNF or the corresponding end-to-end Service, but only of underlying resources such as VMs and servers).
+Also, a problem or challenge may not necessarily be a failure (which could also be recovered by other layers): it could be an issue leading to suboptimal performance, without any failure. A VNF management layer such as the one provided by ONAP may detect such non-failure problems, and provide a recovery solution which no other layer could provide in a given deployment.
+
Preconditions:
@@ -36,7 +39,7 @@ Different types of problems can be simulated, hence the identification of multip
.. image:: auto-UC02-testcases.jpg
-Description of simulated problems/challenges:
+Description of simulated problems/challenges, leading to various test cases:
* Physical Infra Failure
@@ -60,7 +63,6 @@ Description of simulated problems/challenges:
-
Test execution high-level description
=====================================
@@ -76,7 +78,7 @@ The second MSC illustrates the pattern of all test cases for the Resiliency Impr
* simulate the chosen problem (a.k.a. a "Challenge") for this test case, for example suspend a VM which may be used by a VNF
* start tracking the target VNF of this test case
* measure the ONAP-orchestrated VNF Recovery Time
-* then the test stops simulating the problem (for example: resume the VM that was suspended),
+* then the test stops simulating the problem (for example: resume the VM that was suspended)
In parallel, the MSC also shows the sequence of events happening in ONAP, thanks to its configuration to provide Service Assurance for the VNF.
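+
+As an illustration of this test pattern, the following minimal Python sketch shows how the Recovery Time measurement could be structured; ``start_challenge()``, ``stop_challenge()`` and ``vnf_is_recovered()`` are hypothetical stubs standing in for the actual challenge and VNF-tracking code, not real Auto functions:
+
+.. code-block:: python
+
+    import time
+
+    def start_challenge():                # stub: e.g., suspend a VM used by the VNF
+        pass
+
+    def stop_challenge():                 # stub: e.g., resume the suspended VM
+        pass
+
+    def vnf_is_recovered(vnf_id):         # stub: query the status of the target VNF
+        return True
+
+    def run_resiliency_test(vnf_id):
+        start_challenge()                        # simulate the chosen problem
+        t_start = time.time()                    # start tracking the target VNF
+        while not vnf_is_recovered(vnf_id):      # wait for ONAP closed-loop recovery
+            time.sleep(1)
+        recovery_time = time.time() - t_start    # ONAP-orchestrated VNF Recovery Time
+        stop_challenge()                         # stop simulating the problem
+        return recovery_time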
@@ -86,21 +88,21 @@ In parallel, the MSC also shows the sequence of events happening in ONAP, thanks
Test design: data model, implementation modules
===============================================
-The high-level design of classes identifies several entities:
+The high-level design of classes identifies several entities, described as follows:
-* Test Case: as identified above, each is a special case of the overall use case (e.g., categorized by challenge type)
-* Test Definition: gathers all the information necessary to run a certain test case
-* Metric Definition: describes a certain metric that may be measured, in addition to Recovery Time
-* Challenge Definition: describe the challenge (problem, failure, stress, ...) simulated by the test case
-* Recipient: entity that can receive commands and send responses, and that is queried by the Test Definition or Challenge Definition (a recipient would be typically a management service, with interfaces (CLI or API) for clients to query)
-* Resources: with 3 types (VNF, cloud virtual resource such as a VM, physical resource such as a server)
+* ``Test Case`` : as identified above, each is a special case of the overall use case (e.g., categorized by challenge type)
+* ``Test Definition`` : gathers all the information necessary to run a certain test case
+* ``Metric Definition`` : describes a certain metric that may be measured for a Test Case, in addition to Recovery Time
+* ``Challenge Definition`` : describes the challenge (problem, failure, stress, ...) simulated by the test case
+* ``Recipient`` : entity that can receive commands and send responses, and that is queried by the Test Definition or Challenge Definition (a recipient would typically be a management service, with interfaces (CLI or API) for clients to query)
+* ``Resources`` : with 3 types (VNF, cloud virtual resource such as a VM, physical resource such as a server)
Three of these entities have execution-time corresponding classes:
-* Test Execution, which captures all the relevant data of the execution of a Test Definition
-* Challenge Execution, which captures all the relevant data of the execution of a Challenge Definition
-* Metric Value, which captures the a quantitative measurement of a Metric Definition (with a timestamp)
+* ``Test Execution`` , which captures all the relevant data of the execution of a Test Definition
+* ``Challenge Execution`` , which captures all the relevant data of the execution of a Challenge Definition
+* ``Metric Value`` , which captures the quantitative measurement of a Metric Definition (with a timestamp)
.. image:: auto-UC02-data1.jpg
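+
+As a minimal sketch, assuming illustrative attribute names rather than the actual Auto code, these entities could map onto Python classes as follows:
+
+.. code-block:: python
+
+    from dataclasses import dataclass
+    from datetime import datetime
+    from typing import Optional
+
+    @dataclass
+    class ChallengeDefinition:
+        ID: int
+        challenge_type: str                    # e.g., "Physical Infra Failure"
+
+    @dataclass
+    class TestDefinition:
+        ID: int                                # one Test Definition per Test Case
+        challenge_def: ChallengeDefinition
+        test_code_ID: int                      # selects the script to execute
+
+    @dataclass
+    class TestExecution:                       # multiple executions per Test Definition
+        test_def: TestDefinition
+        start_time: datetime
+        recovery_time: Optional[float] = None  # zero or one Recovery Time Metric Value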
@@ -122,13 +124,28 @@ The module design is straightforward: functions and classes for managing data, f
.. image:: auto-UC02-module1.jpg
-This last diagram shows the test user menu functions:
+This last diagram shows the test user menu functions, when used interactively:
.. image:: auto-UC02-module2.jpg
-In future releases of Auto, testing environments such as FuncTest and Yardstick might be leveraged.
+In future releases of Auto, testing environments such as Robot, FuncTest and Yardstick might be leveraged. Use Case code will then be invoked through an API, rather than by CLI interaction.
Also, anonymized test results could be collected from users willing to share them, and aggregates could be
maintained as benchmarks.
+As further illustration, the next figure shows cardinalities of class instances: one Test Definition per Test Case, multiple Test Executions per Test Definition, zero or one Recovery Time Metric Value per Test Execution (zero if the test failed for any reason, including if ONAP failed to recover from the challenge), etc.
+
+.. image:: auto-UC02-cardinalities.png
+
+
+In this particular implementation, both Test Definition and Challenge Definition classes have a generic execution method (e.g., ``run_test_code()`` for Test Definition) which can invoke a particular script, by way of an ID (which can be configured, and serves as a script selector for each Test Definition instance). The overall test execution logic between classes is shown in the next figure.
+
+.. image:: auto-UC02-logic.png
+
+The execution of a test case starts with invoking the generic method from Test Definition, which then creates Execution instances, invokes Challenge Definition methods, performs the Recovery Time calculation, carries out script-specific actions, and writes results to the CSV files.
+
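+A condensed sketch of this generic method and execution flow, assuming simplified class names, a simplified CSV layout, and a hypothetical script ID of 5 (the actual Auto implementation differs in detail):
+
+.. code-block:: python
+
+    import csv
+    import time
+
+    class ChallengeDefinition:
+        """Simplified stand-in for starting/stopping the simulated problem."""
+        def start(self):
+            pass                               # e.g., suspend the target VM
+        def stop(self):
+            pass                               # e.g., resume the target VM
+
+    class TestDefinition:
+        def __init__(self, ID, test_code_ID, challenge_def):
+            self.ID = ID
+            self.test_code_ID = test_code_ID   # configured script selector
+            self.challenge_def = challenge_def
+
+        def run_test_code(self):
+            """Generic method: invoke the script selected by test_code_ID."""
+            scripts = {5: self.test_code005}   # ID-to-script selector table
+            scripts[self.test_code_ID]()
+
+        def test_code005(self):
+            self.challenge_def.start()         # create/track the Challenge Execution
+            t0 = time.time()
+            # ... script-specific actions and VNF tracking go here ...
+            recovery_time = time.time() - t0   # Recovery Time calculation
+            self.challenge_def.stop()
+            with open("results.csv", "a", newline="") as f:
+                csv.writer(f).writerow([self.ID, t0, recovery_time])
+
+    # example: run the test case configured with script ID 5
+    TestDefinition(ID=1, test_code_ID=5, challenge_def=ChallengeDefinition()).run_test_code()
+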
+Finally, the following diagram shows a mapping between these class instances and the initial test case design. It corresponds to the test case which simulates a VM failure, and shows how the OpenStack SDK API is invoked (with a connection object) by the Challenge Definition methods, to suspend and resume a VM.
+
+.. image:: auto-UC02-TC-mapping.png
+
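+For instance, the VM suspend/resume challenge could rely on the OpenStack SDK roughly as follows; the cloud name and server name are placeholders for a real deployment:
+
+.. code-block:: python
+
+    import openstack
+
+    # connection object, with credentials resolved from clouds.yaml
+    conn = openstack.connect(cloud="mycloud")
+
+    server = conn.compute.find_server("vnf-vm-01")   # target VM of the challenge
+
+    conn.compute.suspend_server(server)   # start the challenge: suspend the VM
+    # ... ONAP detects the problem and recovers the VNF; Recovery Time is measured ...
+    conn.compute.resume_server(server)    # stop the challenge: resume the VM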