author    Volodymyr Mytnyk <volodymyrx.mytnyk@intel.com>  2019-01-22 12:04:56 +0000
committer Gerrit Code Review <gerrit@opnfv.org>           2019-01-22 12:04:56 +0000
commit    fff1a029c5e1b57f2c8bc234aa1aeaaa996c6235 (patch)
tree      ee71fadd3d2574545988795e10dceb4f51c657bd
parent    51fcd65a47991a62235f32b63c0b4f732551fca5 (diff)
parent    7f52cb4cf74610a9fe3ba4440c5c657cb5112275 (diff)
Merge "[docs][userguide] Add content and comments to ch12"
-rw-r--r--  docs/testing/user/userguide/12-nsb-overview.rst  41
1 file changed, 40 insertions, 1 deletion
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index ec4df1cae..70aba1e37 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -56,7 +56,7 @@ NSB extension includes:
* Generic data models of Network Services, based on ETSI spec
`ETSI GS NFV-TST 001`_
-* Standalone :term:`context` for VNF testing with SRIOV, OVS, OVS-DPDK, etc
+* Standalone :term:`context` for VNF testing SRIOV, OVS, OVS-DPDK, etc
* Generic VNF configuration models and metrics implemented with Python
classes
* Traffic generator features and traffic profiles
@@ -121,6 +121,13 @@ Network Service framework performs the necessary test steps. It may involve:
Components of Network Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. TODO: provide a list of components in this section and describe them in
+ later sub-sections
+
+.. Components are the methodology, TGs, framework extensions, KPI collection,
+ Testcases, SampleVNFs
+.. Framework extensions include: VNF models, NSPerf Scenario, contexts
+
* *Models for Network Service benchmarking*: Network Service benchmarking
requires a proper modelling approach. NSB provides models as Python files
that define the NSDs and VNFDs.
@@ -169,6 +176,38 @@ for every combination of test case parameters:
* RFC2544 throughput for various defined loss rates (1% is the default)
+KPI Collection
+^^^^^^^^^^^^^^
+
+KPI collection is the process of sampling KPIs at multiple intervals to allow
+investigation into anomalies during runtime. Some KPI sampling intervals are
+adjustable. KPIs are collected from the traffic generators and the NFVI for
+the SUT. Some reporting is already available in NSB, but NSB collects all
+KPIs for analytics to process.
+
+Below is an example list of basic KPIs:
+
+* Throughput
+* Latency
+* Packet delay variation
+* Maximum establishment rate
+* Maximum tear-down rate
+* Maximum simultaneous number of sessions
+
+Many other KPIs can be relevant for a specific NFVI, but in most cases these
+KPIs are enough to give a basic picture of the SUT. NSB also uses
+:term:`collectd` to collect the KPIs. Currently the following collectd
+plug-ins are enabled for NSB testcases:
+
+* Libvirt
+* Interface stats
+* OvS events
+* vSwitch stats
+* Huge Pages
+* RAM
+* CPU usage
+* Intel® PMU
+* Intel(r) RDT
+
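+As a simplified illustration only (not the actual NSB implementation),
+interval-based KPI sampling could look roughly like the sketch below. The
+``read_tg_stats`` and ``read_nfvi_stats`` callables are hypothetical
+placeholders for the traffic generator and NFVI collectors::
+
+  import time
+
+  def sample_kpis(read_tg_stats, read_nfvi_stats, interval=1.0, duration=60.0):
+      # One KPI record is taken per interval; the whole time series is kept
+      # so that runtime anomalies can be investigated by analytics later.
+      samples = []
+      end = time.monotonic() + duration
+      while time.monotonic() < end:
+          record = {'timestamp': time.time()}
+          record.update(read_tg_stats())    # e.g. throughput, latency, delay variation
+          record.update(read_nfvi_stats())  # e.g. CPU usage, huge pages, vSwitch stats
+          samples.append(record)
+          time.sleep(interval)
+      return samples
+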
Graphical Overview
------------------