author    | Emma Foley <emma.l.foley@intel.com> | 2018-09-09 13:21:44 +0100
committer | Emma Foley <emma.l.foley@intel.com> | 2019-01-22 10:11:27 +0000
commit    | 7f52cb4cf74610a9fe3ba4440c5c657cb5112275 (patch)
tree      | cfb5282d2830342eb46bf6798d8def9eb838f1a1 /docs/testing/user/userguide/12-nsb-overview.rst
parent    | fa6509b1bde3d7c2fecad3ffab5a050a27346dae (diff)
[docs][userguide] Add content and comments to ch12
JIRA: YARDSTICK-1335
Change-Id: Ibab629b44a8daeebab95fe7eee056b6403cd98c1
Signed-off-by: Emma Foley <emma.l.foley@intel.com>
Diffstat (limited to 'docs/testing/user/userguide/12-nsb-overview.rst')
-rw-r--r-- | docs/testing/user/userguide/12-nsb-overview.rst | 41
1 file changed, 40 insertions, 1 deletion
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index ec4df1cae..70aba1e37 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -56,7 +56,7 @@ NSB extension includes:

 * Generic data models of Network Services, based on ETSI spec
   `ETSI GS NFV-TST 001`_
-* Standalone :term:`context` for VNF testing with SRIOV, OVS, OVS-DPDK, etc
+* Standalone :term:`context` for VNF testing SRIOV, OVS, OVS-DPDK, etc
 * Generic VNF configuration models and metrics implemented with Python
   classes
 * Traffic generator features and traffic profiles
@@ -121,6 +121,13 @@ Network Service framework performs the necessary test steps. It may involve:
 Components of Network Service
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+.. TODO: provide a list of components in this section and describe them in
+   later sub-sections
+
+.. Components are the methodology, TGs, framework extensions, KPI collection,
+   Testcases, SampleVNFs
+.. Framework extentions include: VNF models, NSPerf Scenario, contexts
+
 * *Models for Network Service benchmarking*: The Network Service benchmarking
   requires the proper modelling approach. The NSB provides models using Python
   files and defining of NSDs and VNFDs.
@@ -169,6 +176,38 @@ for every combination of test case parameters:

 * RFC2544 throughput for various loss rate defined (1% is a default)

+KPI Collection
+^^^^^^^^^^^^^^
+
+KPI collection is the process of sampling KPIs at multiple intervals to allow
+for investigation into anomalies during runtime. Some KPI intervals are
+adjustable. KPIs are collected from traffic generators and NFVI for the SUT.
+There is already some reporting in NSB available, but NSB collects all KPIs for
+analytics to process.
+
+Below is an example list of basic KPIs:
+* Throughput
+* Latency
+* Packet delay variation
+* Maximum establishment rate
+* Maximum tear-down rate
+* Maximum simultaneous number of sessions
+
+Of course, there can be many other KPIs that will be relevant for a specific
+NFVI, but in most cases these KPIs are enough to give you a basic picture of
+the SUT. NSB also uses :term:`collectd` in order to collect the KPIs. Currently
+the following collectd plug-ins are enabled for NSB testcases:
+
+* Libvirt
+* Interface stats
+* OvS events
+* vSwitch stats
+* Huge Pages
+* RAM
+* CPU usage
+* Intel® PMU
+* Intel(r) RDT
+
 Graphical Overview
 ------------------
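
For readers new to the KPI-collection idea this patch documents, the sketch
below (plain Python, not taken from the Yardstick/NSB code; the function and
field names are hypothetical) illustrates in the simplest form what "sampling
KPIs at an adjustable interval" and keeping the resulting time series for
later analysis means:

    #!/usr/bin/env python
    """Minimal, hypothetical sketch of interval-based KPI sampling.

    Independent of the NSB codebase: read_kpis() below is a stand-in for the
    values NSB gathers from its traffic generators and collectd plug-ins.
    """

    import random
    import time


    def read_kpis():
        # Fake KPI source; in NSB these come from the TG / NFVI for the SUT.
        return {
            "throughput_mpps": random.uniform(8.0, 10.0),
            "latency_us": random.uniform(20.0, 40.0),
        }

    def collect(interval_s=1.0, duration_s=5.0):
        """Sample KPIs every interval_s seconds and keep the time series."""
        samples = []
        deadline = time.time() + duration_s
        while time.time() < deadline:
            samples.append({"ts": time.time(), **read_kpis()})
            time.sleep(interval_s)
        return samples

    if __name__ == "__main__":
        for sample in collect(interval_s=0.5, duration_s=2.0):
            print(sample)

Keeping every sample rather than only a final average is what allows the
analytics mentioned above to spot anomalies that occur part-way through a run.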