author     Shravani <shravani.p@tcs.com>  2017-03-07 18:14:05 +0530
committer  Shravani Paladugula <shravani.p@tcs.com>  2017-03-07 14:57:17 +0000
commit     7ca470b86711f5f1abaa439de4ea0626f5849b0b (patch)
tree       8e341314b4ee94873de6c4cd1ba85b0eda3a8601 /docs/userguide
parent     0c2f88aa2ccffc538c276caac88da6841107bf81 (diff)
This patch contains updated documentation for Dashboard, packet
forwarding, PCM utility, Ftrace and scenario testing.
Change-Id: I677faeed6e4c78f30d486701364ca15a1507b1ef
Signed-off-by: Shravani <shravani.p@tcs.com>
Co-Authored-by: Srinivas <srinivas.atmakuri@tcs.com>
Co-Authored-by: RajithaY <rajithax.yerrumsetty@intel.com>
Co-Authored-by: Gundarapu Kalyan Reddy <reddyx.gundarapu@intel.com>
Co-Authored-by: Navya Bathula <navyax.bathula@intel.com>
Diffstat (limited to 'docs/userguide')
18 files changed, 1245 insertions, 1 deletion
diff --git a/docs/userguide/Ftrace.debugging.tool.userguide.rst b/docs/userguide/Ftrace.debugging.tool.userguide.rst
new file mode 100644
index 000000000..0fcbbcf93
--- /dev/null
+++ b/docs/userguide/Ftrace.debugging.tool.userguide.rst
@@ -0,0 +1,257 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+=====================
+FTrace Debugging Tool
+=====================
+
+About Ftrace
+------------
+Ftrace is an internal tracer designed to find out what is going on inside the kernel. It can be
+used for debugging or analyzing latencies and performance issues that take place outside of
+user-space. Although ftrace is typically considered the function tracer, it is really a framework
+of several assorted tracing utilities. One of the most common uses of ftrace is event tracing.
+
+**Note:**
+
+- For KVMFORNFV, Ftrace is preferred as it is an in-built kernel tool
+- It is more stable compared to other debugging tools
+
+Version Features
+----------------
+
++-----------------------------+-----------------------------------------------+
+|                             |                                               |
+| **Release**                 | **Features**                                  |
+|                             |                                               |
++=============================+===============================================+
+|                             | - Ftrace Debugging tool is not implemented in |
+| Colorado                    |   the Colorado release of KVMFORNFV           |
+|                             |                                               |
++-----------------------------+-----------------------------------------------+
+|                             | - Ftrace aids in debugging the KVMFORNFV      |
+| Danube                      |   4.4-linux-kernel level issues               |
+|                             | - Option to disable it if not required        |
++-----------------------------+-----------------------------------------------+
+
+
+Implementation of Ftrace
+------------------------
+Ftrace uses the debugfs file system to hold the control files as
+well as the files to display output.
+
+When debugfs is configured into the kernel (which selecting any ftrace
+option will do) the directory /sys/kernel/debug will be created.
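Before writing to the tracing control files it can help to confirm that debugfs is actually mounted. A minimal sketch of such a check, reading /proc/mounts (a hypothetical helper, not part of the KVMFORNFV scripts):

```python
# Hypothetical helper: check whether debugfs is mounted by scanning /proc/mounts.
# Returns False if the mounts file cannot be read (e.g. on non-Linux systems).
def debugfs_mounted(mounts_file="/proc/mounts"):
    try:
        with open(mounts_file) as f:
            for line in f:
                fields = line.split()
                # /proc/mounts format: device mountpoint fstype options dump pass
                if len(fields) >= 3 and fields[2] == "debugfs":
                    return True
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("debugfs mounted" if debugfs_mounted() else "debugfs not mounted")
```

If this reports that debugfs is not mounted, the fstab entry or the run-time mount command below takes care of it.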
+To mount this directory, you can add this line to your /etc/fstab file:
+
+.. code:: bash
+
+    debugfs /sys/kernel/debug debugfs defaults 0 0
+
+Or you can mount it at run time with:
+
+.. code:: bash
+
+    mount -t debugfs nodev /sys/kernel/debug
+
+Some configurations for Ftrace are used for other purposes, like finding latency or analyzing the
+system. For the purpose of debugging, the kernel configuration parameters that should be enabled
+are:
+
+.. code:: bash
+
+    CONFIG_FUNCTION_TRACER=y
+    CONFIG_FUNCTION_GRAPH_TRACER=y
+    CONFIG_STACK_TRACER=y
+    CONFIG_DYNAMIC_FTRACE=y
+
+The above parameters must be enabled in /boot/config-4.4.0-el7.x86_64, i.e., the kernel config
+file, for ftrace to work. If not enabled, change the parameter to ``y`` and run:
+
+.. code:: bash
+
+    On CentOS
+    grub2-mkconfig -o /boot/grub2/grub.cfg
+    sudo reboot
+
+Re-check the parameters after reboot before running ftrace.
+
+Files in Ftrace
+---------------
+Below is a list of a few major files in Ftrace.
+
+ ``current_tracer:``
+
+ This is used to set or display the current tracer that is configured.
+
+ ``available_tracers:``
+
+ This holds the different types of tracers that have been compiled into the kernel. The tracers
+ listed here can be configured by echoing their name into current_tracer.
+
+ ``tracing_on:``
+
+ This sets or displays whether writing to the trace ring buffer is enabled. Echo 0 into this file
+ to disable the tracer or 1 to enable it.
+
+ ``trace:``
+
+ This file holds the output of the trace in a human readable format.
+
+ ``tracing_cpumask:``
+
+ This is a mask that lets the user trace only on specified CPUs. The format is a hex string
+ representing the CPUs.
+
+ ``events:``
+
+ It holds event tracepoints (also known as static tracepoints) that have been compiled into the
+ kernel. It shows what event tracepoints exist and how they are grouped by system.
+
+
+Available Tracers
+-----------------
+
+Here is the list of current tracers that may be configured based on usage.
+
+- function
+- function_graph
+- irqsoff
+- preemptoff
+- preemptirqsoff
+- wakeup
+- wakeup_rt
+
+A brief description of a few:
+
+ ``function:``
+
+ Function call tracer to trace all kernel functions.
+
+ ``function_graph:``
+
+ Similar to the function tracer, except that the function tracer probes the functions on their
+ entry whereas the function graph tracer traces on both entry and exit of the functions.
+
+ ``nop:``
+
+ This is the "trace nothing" tracer. To remove tracers from tracing simply echo "nop" into
+ current_tracer.
+
+Examples:
+
+.. code:: bash
+
+    To list available tracers:
+    [tracing]# cat available_tracers
+    function_graph function wakeup wakeup_rt preemptoff irqsoff preemptirqsoff nop
+
+    Usage:
+    [tracing]# echo function > current_tracer
+    [tracing]# cat current_tracer
+    function
+
+    To view output:
+    [tracing]# cat trace | head -10
+
+    To stop tracing:
+    [tracing]# echo 0 > tracing_on
+
+    To start/restart tracing:
+    [tracing]# echo 1 > tracing_on
+
+
+===================
+Ftrace in KVMFORNFV
+===================
+Ftrace is part of the KVMFORNFV Danube release. Kvmfornfv currently uses the 4.4 Linux kernel as
+part of deployment and runs cyclictest for testing purposes, generating latency values (max, min,
+avg). Ftrace (or function tracer) is a stable in-built kernel debugging tool which tests the
+kernel in real time and outputs a log as part of the code. These output logs are useful in the
+following ways:
+
+ - Kernel debugging
+ - Helps in kernel code optimization
+ - Can be used to better understand the kernel-level code flow
+ - Log generation for each test run, if enabled
+ - Choice of disabling and enabling
+
+Ftrace logs for KVMFORNFV can be found `here`_.
+
+.. _here: http://artifacts.opnfv.org/kvmfornfv.html
+
+Ftrace Usage in KVMFORNFV Kernel Debugging
+------------------------------------------
+Kvmfornfv has two scripts in /ci/envs to provide the ftrace tool:
+
+ - enable_trace.sh
+ - disable_trace.sh
+
+Enabling Ftrace in KVMFORNFV
+----------------------------
+
+The enable_trace.sh script is triggered by changing the ftrace_enable value in the
+test_kvmfornfv.sh script, which is zero by default. Change it as below to enable Ftrace and
+trigger the script:
+
+.. code:: bash
+
+    ftrace_enable=1
+
+Note:
+
+- Ftrace is enabled before the test execution starts
+
+Details of the enable_trace Script
+----------------------------------
+
+- The CPU coremask is calculated using getcpumask()
+- All the required events are enabled by
+  echoing "1" to the $TRACEDIR/events/event_name/enable file
+
+Example:
+
+.. code:: bash
+
+    $TRACEDIR = /sys/kernel/debug/tracing/
+    sudo bash -c "echo 1 > $TRACEDIR/events/irq/enable"
+    sudo bash -c "echo 1 > $TRACEDIR/events/task/enable"
+    sudo bash -c "echo 1 > $TRACEDIR/events/syscalls/enable"
+
+The set_event file contains the list of all enabled events.
+
+- The function tracer is selected. It may be changed to other available tracers based on
+  requirement.
+
+.. code:: bash
+
+    sudo bash -c "echo function > $TRACEDIR/current_tracer"
+
+- When tracing is turned on by setting ``tracing_on=1``, the ``trace`` file keeps getting
+  appended with the traced data until ``tracing_on=0``, and then the ftrace buffer gets cleared.
+
+.. code:: bash
+
+    To stop/pause:
+    echo 0 > tracing_on
+
+    To start/restart:
+    echo 1 > tracing_on
+
+- Once tracing is disabled, the disable_trace.sh script is triggered.
+
+Details of the disable_trace Script
+-----------------------------------
+In the disable_trace script the following are done:
+
+- The trace file is copied and moved to the /tmp folder based on timestamp.
+- The current_tracer file is set to ``nop``
+- The set_event file is cleared, i.e., all the enabled events are disabled
+- The kernel Ftrace is disabled/unmounted
+
+
+Publishing Ftrace Logs
+----------------------
+The generated trace log is pushed to the `artifacts`_ of the Kvmfornfv project by the releng
+team, which is done by a script in the JJB of releng. The `trigger`_ in the script is:
+
+.. code:: bash
+
+    echo "Uploading artifacts for future debugging needs...."
+    gsutil cp -r $WORKSPACE/build_output/log-*.tar.gz $GS_LOG_LOCATION > $WORKSPACE/gsutil.log 2>&1
+
+.. _artifacts: https://artifacts.opnfv.org/
+
+.. _trigger: https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=jjb/kvmfornfv/kvmfornfv-upload-artifact.sh;h=56fb4f9c18a83c689a916dc6c85f9e3ddf2479b2;hb=HEAD#l53
+
+
+.. include:: pcm_utility.userguide.rst
diff --git a/docs/userguide/images/Cpustress-Idle.png b/docs/userguide/images/Cpustress-Idle.png
Binary files differ
new file mode 100644
index 000000000..b4b4e1112
--- /dev/null
+++ b/docs/userguide/images/Cpustress-Idle.png
diff --git a/docs/userguide/images/Dashboard-screenshot-1.png b/docs/userguide/images/Dashboard-screenshot-1.png
Binary files differ
new file mode 100644
index 000000000..7ff809697
--- /dev/null
+++ b/docs/userguide/images/Dashboard-screenshot-1.png
diff --git a/docs/userguide/images/Dashboard-screenshot-2.png b/docs/userguide/images/Dashboard-screenshot-2.png
Binary files differ
new file mode 100644
index 000000000..a5c4e01b5
--- /dev/null
+++ b/docs/userguide/images/Dashboard-screenshot-2.png
diff --git a/docs/userguide/images/Guest_Scenario.png b/docs/userguide/images/Guest_Scenario.png
Binary files differ
new file mode 100644
index 000000000..550c0fe6f
--- /dev/null
+++ b/docs/userguide/images/Guest_Scenario.png
diff --git a/docs/userguide/images/Host_Scenario.png b/docs/userguide/images/Host_Scenario.png
Binary files differ
new file mode 100644
index 000000000..89789aa7b
--- /dev/null
+++ b/docs/userguide/images/Host_Scenario.png
diff
--git a/docs/userguide/images/IOstress-Idle.png b/docs/userguide/images/IOstress-Idle.png
Binary files differ
new file mode 100644
index 000000000..fe4e5fc81
--- /dev/null
+++ b/docs/userguide/images/IOstress-Idle.png
diff --git a/docs/userguide/images/IXIA1.png b/docs/userguide/images/IXIA1.png
Binary files differ
new file mode 100644
index 000000000..682de7c57
--- /dev/null
+++ b/docs/userguide/images/IXIA1.png
diff --git a/docs/userguide/images/Idle-Idle.png b/docs/userguide/images/Idle-Idle.png
Binary files differ
new file mode 100644
index 000000000..d619f65ea
--- /dev/null
+++ b/docs/userguide/images/Idle-Idle.png
diff --git a/docs/userguide/images/Memorystress-Idle.png b/docs/userguide/images/Memorystress-Idle.png
Binary files differ
new file mode 100644
index 000000000..b9974a7a2
--- /dev/null
+++ b/docs/userguide/images/Memorystress-Idle.png
diff --git a/docs/userguide/images/SRIOV_Scenario.png b/docs/userguide/images/SRIOV_Scenario.png
Binary files differ
new file mode 100644
index 000000000..62e116ada
--- /dev/null
+++ b/docs/userguide/images/SRIOV_Scenario.png
diff --git a/docs/userguide/images/UseCaseDashboard.png b/docs/userguide/images/UseCaseDashboard.png
Binary files differ
new file mode 100644
index 000000000..9dd14d26e
--- /dev/null
+++ b/docs/userguide/images/UseCaseDashboard.png
diff --git a/docs/userguide/images/dashboard-architecture.png b/docs/userguide/images/dashboard-architecture.png
Binary files differ
new file mode 100644
index 000000000..821484e74
--- /dev/null
+++ b/docs/userguide/images/dashboard-architecture.png
diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst
index ae0b380d4..fcef57250 100644
--- a/docs/userguide/index.rst
+++ b/docs/userguide/index.rst
@@ -7,12 +7,16 @@ KVMforNFV User Guide
 ********************
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 3
 
    ./abstract.rst
    ./introduction.rst
    ./common.platform.render.rst
    ./feature.userguide.render.rst
+   ./Ftrace.debugging.tool.userguide.rst
+   ./kvmfornfv.cyclictest-dashboard.userguide.rst
    ./low_latency.userguide.rst
    ./live_migration.userguide.rst
+   ./packet_forwarding.userguide.rst
+   ./pcm_utility.userguide.rst
    ./tuning.userguide.rst
diff --git a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
new file mode 100644
index 000000000..6333d0917
--- /dev/null
+++ b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
@@ -0,0 +1,258 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+========================================
+Dashboard for KVM4NFV Daily Test Results
+========================================
+
+Abstract
+========
+
+This chapter explains the procedure to configure InfluxDB and Grafana on Node1 or Node2,
+depending on the test type, to publish KVM4NFV cyclic test results. The cyclictest cases are
+executed and the results are published on the Yardstick Dashboard (Grafana). InfluxDB is the
+database which stores the cyclictest results and Grafana is a visualization suite to view the
+maximum, minimum and average values of the time series data of cyclictest results. The framework
+is shown in the below image.
+
+.. Figure:: ../images/dashboard-architecture.png
+
+
+Version Features
+================
+
++-----------------------------+--------------------------------------------+
+|                             |                                            |
+| **Release**                 | **Features**                               |
+|                             |                                            |
++=============================+============================================+
+|                             | - Data published in JSON file format       |
+| Colorado                    | - No database support to store the test's  |
+|                             |   latency values of cyclictest             |
+|                             | - For each run, the previous run's output  |
+|                             |   file is replaced with a new file with    |
+|                             |   the current latency values.              |
| ++-----------------------------+--------------------------------------------+ +| | - Test results are stored in Influxdb | +| | - Graphical representation of the latency | +| Danube | values using Grafana suite. (Dashboard) | +| | - Supports Graphical view for multiple | +| | testcases and test-types (Stress/Idle) | ++-----------------------------+--------------------------------------------+ + + +Installation Steps: +=================== +To configure Yardstick, InfluxDB and Grafana for KVMFORNFV project following sequence of steps are followed: + +**Note:** + +All the below steps are done as per the script, which is a part of CICD integration. + +.. code:: bash + + For Yardstick: + git clone https://gerrit.opnfv.org/gerrit/yardstick + + For InfluxDB: + docker pull tutum/influxdb + docker run -d --name influxdb -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 tutum/influxdb + docker exec -it influxdb bash + $influx + >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES + >CREATE DATABASE yardstick; + >use yardstick; + >show MEASUREMENTS; + + For Grafana: + docker pull grafana/grafana + docker run -d --name grafana -p 3000:3000 grafana/grafana + +The Yardstick document for Grafana and InfluxDB configuration can be found `here`_. + +.. _here: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally + +Configuring the Dispatcher Type: +================================ +Need to configure the dispatcher type in /etc/yardstick/yardstick.conf depending on the dispatcher +methods which are used to store the cyclictest results. A sample yardstick.conf can be found at +/yardstick/etc/yardstick.conf.sample, which can be copied to /etc/yardstick. + +.. code:: bash + + mkdir -p /etc/yardstick/ + cp /yardstick/etc/yardstick.conf.sample /etc/yardstick/yardstick.conf + +**Dispatcher Modules:** + +Three type of dispatcher methods are available to store the cyclictest results. + +- File +- InfluxDB +- HTTP + +**1. 
+**1. File**: The default dispatcher module is file. If the dispatcher module is configured as
+file, then the test results are stored in the yardstick.out file (default path:
+/tmp/yardstick.out). The dispatcher module of the "Verify Job" is the default, so the results are
+stored in the yardstick.out file for the verify job. Storing all the verify jobs in the InfluxDB
+database causes redundancy of latency values; hence, the file output format is preferred.
+
+.. code:: bash
+
+    [DEFAULT]
+    debug = False
+    dispatcher = file
+
+**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are
+stored in InfluxDB. Users can check the test results stored in InfluxDB (the database) on
+Grafana, which is used to visualize the time series data.
+
+To configure influxdb, the following content in /etc/yardstick/yardstick.conf needs to be
+updated:
+
+.. code:: bash
+
+    [DEFAULT]
+    debug = False
+    dispatcher = influxdb
+
+The dispatcher module of the "Daily Job" is influxdb, so the results are stored in InfluxDB and
+then published to the Dashboard.
+
+**3. HTTP**: If the dispatcher module is configured as http, users can check the test results on
+the OPNFV testing dashboard, which uses MongoDB as backend.
+
+.. code:: bash
+
+    [DEFAULT]
+    debug = False
+    dispatcher = http
+
+.. Figure:: ../images/UseCaseDashboard.png
+
+
+Detailing the dispatcher module in verify and daily Jobs
+--------------------------------------------------------
+
+KVM4NFV updates the dispatcher module in the yardstick configuration file
+(/etc/yardstick/yardstick.conf) depending on the job type (Verify/Daily). Once the test is
+completed, results are published to the respective dispatcher modules.
+
+The dispatcher module is configured for each job type as mentioned below.
+
+1. ``Verify Job`` : The default "DISPATCHER_TYPE", i.e. file (/tmp/yardstick.out), is used. Users
+can also see the test results on the Jenkins console log.
+
+.. code:: bash
+
+    *"max": "00030", "avg": "00006", "min": "00006"*
+
+2. ``Daily Job`` : The OPNFV InfluxDB url is configured as the dispatcher module.
+
+.. code:: bash
+
+    DISPATCHER_TYPE=influxdb
+    DISPATCHER_INFLUXDB_TARGET="http://104.197.68.199:8086"
+
+InfluxDB only supports the line protocol; the json protocol is deprecated.
+
+For example, the raw_result of cyclictest in json format is:
+ ::
+
+  "benchmark": {
+      "timestamp": 1478234859.065317,
+      "errors": "",
+      "data": {
+         "max": "00012",
+         "avg": "00008",
+         "min": "00007"
+      },
+      "sequence": 1
+  },
+  "runner_id": 23
+  }
+
+
+With the help of "influxdb_line_protocol", the json is transformed into a line string:
+ ::
+
+  'kvmfornfv_cyclictest_idle_idle,deploy_scenario=unknown,host=kvm.LF,
+  installer=unknown,pod_name=unknown,runner_id=23,scenarios=Cyclictest,
+  task_id=e7be7516-9eae-406e-84b6-e931866fa793,version=unknown
+  avg="00008",max="00012",min="00007" 1478234859065316864'
+
+
+The InfluxDB api, which is already implemented in `Influxdb`_, will post the data in line format
+into the database.
+
+``Displaying Results on Grafana dashboard:``
+
+- Once the test results are stored in InfluxDB, the dashboard configuration file (JSON) which is
+  used to display the cyclictest results on Grafana needs to be created by following the
+  `Grafana-procedure`_ and then pushed into `yardstick-repo`_
+
+- Grafana can be accessed at `Login`_ using the credentials opnfv/opnfv and used for visualizing
+  the collected test data as shown in `Visual`_
+
+
+.. Figure:: ../images/Dashboard-screenshot-1.png
+
+.. Figure:: ../images/Dashboard-screenshot-2.png
+
+.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
+
+.. _Visual: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest
+
+.. _Login: http://testresults.opnfv.org/grafana/login
+
+.. _Grafana-procedure: https://wiki.opnfv.org/display/yardstick/How+to+work+with+grafana+dashboard
+
+.. _yardstick-repo: https://git.opnfv.org/cgit/yardstick/tree/dashboard/KVMFORNFV-Cyclictest
+
+.. _GrafanaDoc: http://docs.grafana.org/
+
+Understanding Kvmfornfv Grafana Dashboard
+=========================================
+
+The Kvmfornfv Dashboard found at http://testresults.opnfv.org/ currently supports a graphical
+view of Cyclictest. For viewing the Kvmfornfv Dashboard use:
+
+.. code:: bash
+
+    http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest
+
+    The login details are:
+
+    Username: opnfv
+    Password: opnfv
+
+The Dashboard has four tables, each representing a specific test-type of the cyclictest case:
+
+- Kvmfornfv_Cyclictest_Idle-Idle
+- Kvmfornfv_Cyclictest_CPUstress-Idle
+- Kvmfornfv_Cyclictest_Memorystress-Idle
+- Kvmfornfv_Cyclictest_IOstress-Idle
+
+Note:
+
+- For all graphs, the X-axis is marked with time stamps and the Y-axis with values in
+  microsecond units.
+
+**A brief description of what each graph of the dashboard represents:**
+
+1. Idle-Idle Graph
+------------------
+The `Idle-Idle`_ graph displays the average, maximum and minimum latency values obtained by
+running the Idle-Idle test-type of the Cyclictest. Idle-Idle implies that no stress is applied
+on the Host or the Guest.
+
+.. _Idle-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=10&fullscreen
+
+.. Figure:: ../images/Idle-Idle.png
+
+2. CPU_Stress-Idle Graph
+------------------------
+The `Cpu_Stress-Idle`_ graph displays the average, maximum and minimum latency values obtained
+by running the CPU_Stress-Idle test-type of the Cyclictest. CPU_Stress-Idle implies that CPU
+stress is applied on the Host and no stress on the Guest.
+
+.. _Cpu_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=11&fullscreen
+
+.. Figure:: ../images/Cpustress-Idle.png
+
+3. Memory_Stress-Idle Graph
+---------------------------
+The `Memory_Stress-Idle`_ graph displays the average, maximum and minimum latency values
+obtained by running the Memory_Stress-Idle test-type of the Cyclictest. Memory_Stress-Idle
+implies that memory stress is applied on the Host and no stress on the Guest.
+
+.. _Memory_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=12&fullscreen
+
+.. Figure:: ../images/Memorystress-Idle.png
+
+4. IO_Stress-Idle Graph
+-----------------------
+The `IO_Stress-Idle`_ graph displays the average, maximum and minimum latency values obtained by
+running the IO_Stress-Idle test-type of the Cyclictest. IO_Stress-Idle implies that IO stress is
+applied on the Host and no stress on the Guest.
+
+.. _IO_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=13&fullscreen
+
+.. Figure:: ../images/IOstress-Idle.png
diff --git a/docs/userguide/low_latency.userguide.rst b/docs/userguide/low_latency.userguide.rst
index e0d2791df..66e63770c 100644
--- a/docs/userguide/low_latency.userguide.rst
+++ b/docs/userguide/low_latency.userguide.rst
@@ -66,3 +66,47 @@ Run-time Environment Setup
 Not only are special kernel parameters needed but a special run-time
 environment is also required. Please refer to `tunning.userguide` for more
 explanation.
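The cyclictest cases added below report latency as a max/avg/min triple, the same values the dashboard publishes. As an illustration only, with hypothetical sample data, such a summary can be computed from raw per-loop latencies like so:

```python
# Illustrative only: summarize raw cyclictest-style latency samples (microseconds)
# into the max/avg/min triple used by the KVM4NFV dashboard.
def summarize_latencies(samples_us):
    if not samples_us:
        raise ValueError("no latency samples")
    return {
        "max": max(samples_us),
        "avg": sum(samples_us) / len(samples_us),
        "min": min(samples_us),
    }

# Hypothetical sample data, standing in for one cyclictest run.
print(summarize_latencies([7, 8, 8, 12, 7]))  # → {'max': 12, 'avg': 8.4, 'min': 7}
```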
+
+Test cases to measure Latency
+=============================
+
+Cyclictest case
+---------------
+
+Understanding the naming convention
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Idle-Idle test-type
+~~~~~~~~~~~~~~~~~~~
+
+CPU_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Memory_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+IO_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+CPU_Stress-CPU_Stress test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Memory_Stress-Memory_Stress test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+IO_Stress-IO_Stress test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Packet Forwarding Test case
+---------------------------
+
+Packet forwarding to Host
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Packet forwarding to Guest
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Packet forwarding to Guest using SRIOV
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
diff --git a/docs/userguide/packet_forwarding.userguide.rst b/docs/userguide/packet_forwarding.userguide.rst
new file mode 100644
index 000000000..ba117508c
--- /dev/null
+++ b/docs/userguide/packet_forwarding.userguide.rst
@@ -0,0 +1,555 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+=================
+PACKET FORWARDING
+=================
+
+=======================
+About Packet Forwarding
+=======================
+
+Packet Forwarding is a test suite of KVMFORNFV which is used to measure the total time taken by a
+**Packet** generated by the traffic generator to return from the Guest/Host as per the
+implemented scenario. Packet Forwarding is implemented using the VSWITCHPERF/``VSPERF software of
+OPNFV`` and an ``IXIA Traffic Generator``.
+
+Version Features
+----------------
+
++-----------------------------+---------------------------------------------------+
+|                             |                                                   |
+| **Release**                 | **Features**                                      |
+|                             |                                                   |
++=============================+===================================================+
+|                             | - Packet Forwarding is not part of the Colorado   |
+| Colorado                    |   release of KVMFORNFV                            |
+|                             |                                                   |
++-----------------------------+---------------------------------------------------+
+|                             | - Packet Forwarding is a testcase in KVMFORNFV    |
+|                             | - Implements three scenarios (Host/Guest/SRIOV)   |
+|                             |   as part of testing in KVMFORNFV                 |
+| Danube                      | - Uses available testcases of OPNFV's VSWITCHPERF |
+|                             |   software (PVP/PVVP)                             |
+|                             | - Works with IXIA Traffic Generator               |
++-----------------------------+---------------------------------------------------+
+
+======
+VSPERF
+======
+
+VSPerf is an OPNFV testing project.
+VSPerf will develop a generic and architecture agnostic vSwitch testing framework and associated
+tests that will serve as a basis for validating the suitability of different vSwitch
+implementations in a Telco NFV deployment environment. The output of this project will be
+utilized by the OPNFV Performance and Test group and its associated projects, as part of OPNFV
+Platform and VNF level testing and validation.
+
+For complete VSPERF documentation go to `link.`_
+
+.. _link.: http://artifacts.opnfv.org/vswitchperf/colorado/index.html
+
+
+Installation
+------------
+Guidelines for installing `VSPERF`_.
+
+.. _VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
+
+Supported Operating Systems
+---------------------------
+
+* CentOS 7
+* Fedora 20
+* Fedora 21
+* Fedora 22
+* RedHat 7.2
+* Ubuntu 14.04
+
+Supported vSwitches
+-------------------
+The vSwitch must support OpenFlow 1.3 or greater.
+
+* OVS (built from source).
+* OVS with DPDK (built from source).
+
+Supported Hypervisors
+---------------------
+
+* QEMU version 2.3.
+
+Other Requirements
+------------------
+The test suite requires Python 3.3 and relies on a number of other
+packages. These need to be installed for the test suite to function.
+
+Installation of required packages, preparation of the Python 3 virtual
+environment and compilation of OVS, DPDK and QEMU is performed by the
+script **systems/build_base_machine.sh**. It should be executed under
+the user account which will be used for vsperf execution.
+
+ **Please Note:** Password-less sudo access must be configured for the given user
+ before the script is executed.
+
+Execution of the installation script:
+
+.. code:: bash
+
+    $ cd vswitchperf
+    $ cd systems
+    $ ./build_base_machine.sh
+
+The script **build_base_machine.sh** will install all the vsperf dependencies
+in terms of system packages, Python 3.x and required Python modules.
+In case of CentOS 7 it will install Python 3.3 from an additional repository
+provided by Software Collections (`a link`_). In case of RedHat 7 it will
+install Python 3.4 as an alternate installation in /usr/local/bin. The installation
+script will also use `virtualenv`_ to create a vsperf virtual environment,
+which is isolated from the default Python environment. This environment will
+reside in a directory called **vsperfenv** in $HOME.
+
+You will need to activate the virtual environment every time you start a
+new shell session. Its activation is specific to your OS.
+
+For running testcases, VSPERF is installed on Intel pod1-node2, on which the CentOS
+operating system is installed. Only the VSPERF installation on CentOS is discussed here.
+For installation steps on other operating systems please refer to `here`_.
+
+.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
+
+For CentOS 7
+------------
+
+**Python 3 Packages**
+
+To avoid file permission errors and Python version issues, use virtualenv to create an isolated
+environment with Python3.
+The required Python 3 packages can be found in the `requirements.txt` file in the root of the
+test suite. They can be installed in your virtual environment like so:
+
+.. code:: bash
+
+    scl enable python33 bash
+    # Create virtual environment
+    virtualenv vsperfenv
+    cd vsperfenv
+    source bin/activate
+    pip install -r requirements.txt
+
+
+You need to activate the virtual environment every time you start a new shell session.
+To activate, simply run:
+
+.. code:: bash
+
+    scl enable python33 bash
+    cd vsperfenv
+    source bin/activate
+
+
+Working Behind a Proxy
+----------------------
+
+If you're behind a proxy, you'll likely want to configure this before running any of the above.
+For example:
+
+.. code:: bash
+
+    export http_proxy=proxy.mycompany.com:123
+    export https_proxy=proxy.mycompany.com:123
+
+
+.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
+.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/
+
+For other OS specific activation click `this link`_:
+
+.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements
+
+Traffic Generators
+------------------
+VSPERF supports many traffic generators. For configuring VSPERF to work with the available
+traffic generator, go through `this`_.
+
+.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html
+
+VSPERF supports the following traffic generators:
+
+  * Dummy (DEFAULT): Allows you to use your own external
+    traffic generator.
+  * IXIA (IxNet and IxOS)
+  * Spirent TestCenter
+  * Xena Networks
+  * MoonGen
+
+To see the list of traffic gens from the cli:
+
+.. code-block:: console
+
+    $ ./vsperf --list-trafficgens
+
+This guide provides the details of how to install
+and configure the various traffic generators.
+
+As KVM4NFV uses only the IXIA traffic generator, it is discussed here. For complete
+documentation regarding traffic generators please follow this `link`_.
+
+.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD
+
+==========
+IXIA Setup
+==========
+
+=====================
+Hardware Requirements
+=====================
+VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork), a
+machine that runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.
+
+Installation
+------------
+
+Follow the [installation instructions] to install.
+
+IXIA Setup
+----------
+
+On the CentOS 7 system
+~~~~~~~~~~~~~~~~~~~~~~
+You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.
+
+On the IXIA client software system
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Find the IxNetwork TCL server app (start -> All Programs -> IXIA -> IxNetwork ->
+IxNetwork_$(VER_NUM) -> IxNetwork TCL Server):
+
+ - Right click on IxNetwork TCL Server, select properties
+ - Under the shortcut tab in the Target dialogue box make sure there is the argument
+   "-tclport xxxx" where xxxx is your port number (take note of this port number, you
+   will need it for the 10_custom.conf file).
+
+.. Figure:: ../images/IXIA1.png
+
+- Hit Ok and start the TCL server application
+
+VSPERF configuration
+--------------------
+
+There are several configuration options specific to the IxNetworks traffic generator
+from IXIA. It is essential to set them correctly before VSPERF is executed
+for the first time.
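These options are normally collected in a custom VSPERF configuration file (for instance the 10_custom.conf mentioned above, which uses Python syntax). A hypothetical sketch follows; every address, card and port value here is a placeholder that must be replaced for a real setup:

```python
# Hypothetical 10_custom.conf fragment for an IXIA/IxNetwork setup;
# all IP addresses, the TCL port, card and port numbers are placeholders.
TRAFFICGEN_IXNET_MACHINE = '10.10.120.6'   # IxNetwork TCL Server IP (placeholder)
TRAFFICGEN_IXNET_PORT = '9127'             # the -tclport value noted earlier (placeholder)
TRAFFICGEN_IXNET_USER = 'vsperf_user'      # placeholder username
TRAFFICGEN_IXIA_HOST = '10.10.120.7'       # IXIA chassis IP (placeholder)
TRAFFICGEN_IXIA_CARD = '1'                 # card with the dedicated ports
TRAFFICGEN_IXIA_PORT1 = '1'                # first dedicated port
TRAFFICGEN_IXIA_PORT2 = '2'                # second dedicated port
```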
Detailed description of options follows:

  * TRAFFICGEN_IXNET_MACHINE - IP address of the server where the IxNetwork TCL Server is running
  * TRAFFICGEN_IXNET_PORT - port on which the IxNetwork TCL Server accepts connections from
    TCL clients
  * TRAFFICGEN_IXNET_USER - username, which will be used during communication with the IxNetwork
    TCL Server and the IXIA chassis
  * TRAFFICGEN_IXIA_HOST - IP address of the IXIA traffic generator chassis
  * TRAFFICGEN_IXIA_CARD - identification of the card with dedicated ports at the IXIA chassis
  * TRAFFICGEN_IXIA_PORT1 - identification of the first dedicated port at TRAFFICGEN_IXIA_CARD
    at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
    unidirectional traffic, it is essential to correctly connect the 1st IXIA port to the 1st NIC
    at the DUT, i.e. to the first PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
    be able to pass through the vSwitch.
  * TRAFFICGEN_IXIA_PORT2 - identification of the second dedicated port at TRAFFICGEN_IXIA_CARD
    at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
    unidirectional traffic, it is essential to correctly connect the 2nd IXIA port to the 2nd NIC
    at the DUT, i.e. to the second PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
    be able to pass through the vSwitch.
  * TRAFFICGEN_IXNET_LIB_PATH - path to the DUT-specific installation of the IxNetwork TCL API
  * TRAFFICGEN_IXNET_TCL_SCRIPT - name of the TCL script, which VSPERF will use for
    communication with the IXIA TCL server
  * TRAFFICGEN_IXNET_TESTER_RESULT_DIR - folder accessible from the IxNetwork TCL server,
    where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
  * TRAFFICGEN_IXNET_DUT_RESULT_DIR - directory accessible from the DUT, where test
    results from the IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
    test-results-share_

.. _test-results-share:

Test results share
------------------

VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
results are stored at the IxNetwork TCL server. Results are stored in the folder defined by
the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
folder must be shared (e.g. via the Samba protocol) between the TCL Server and the DUT, where
VSPERF is executed. VSPERF expects the test results to be available in the directory
configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.

Example of sharing configuration:

 * Create a new folder at the IxNetwork TCL server machine, e.g. ``c:\ixia_results``
 * Modify the sharing options of the ``ixia_results`` folder to share it with everybody
 * Create a new directory at the DUT, where the shared directory with results
   will be mounted, e.g. ``/mnt/ixia_results``
 * Update your custom VSPERF configuration file as follows:

   .. code-block:: python

       TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
       TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'

   Note: It is essential to use forward slashes '/' also in the path
   configured by the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
 * Install the cifs-utils package,

   e.g. on an RPM-based Linux distribution:

   .. code-block:: console

       yum install cifs-utils

 * Mount the shared directory, so VSPERF can access the test results,

   e.g. by adding a new record into ``/etc/fstab``:

   .. code-block:: console

       mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results \
         -o file_mode=0777,dir_mode=0777,nounix

It is recommended to verify that any new file created in the ``c:/ixia_results`` folder
is visible at the DUT inside the ``/mnt/ixia_results`` directory.


Cloning and building src dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build
them in a preferred location, or you could use vswitchperf/src.
The vswitchperf/src directory
contains makefiles that will allow you to clone and build the libraries that VSPERF depends on,
such as DPDK and OVS. To clone and build, simply run:

.. code:: bash

    cd src
    make

To delete a src subdirectory and its contents so that you can re-clone, simply use:

.. code:: bash

    make cleanse

Configure the `./conf/10_custom.conf` file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The supplied `10_custom.conf` file must be modified, as it contains configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial contents. Any configuration item mentioned in any .conf file in the `./conf` directory can be added, and the default will be overridden by the custom
configuration value.

Using a custom settings file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively, a custom settings file can be passed to `vsperf` via the `--conf-file` argument.

.. code:: bash

    ./vsperf --conf-file <path_to_settings_py> ...

Note that configuration passed in via the environment (`--load-env`) or via another command line
argument will override both the default and your custom configuration files. This
"priority hierarchy" can be described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)

Executing tests
~~~~~~~~~~~~~~~
Before running any tests, make sure you have root permissions by adding the following line to /etc/sudoers:

.. code:: bash

    username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.

To list the available tests:

.. code:: bash

    ./vsperf --list-tests


To run a group of tests, for example all tests with a name containing
'RFC2544':

.. code:: bash

    ./vsperf --conf-file=user_settings.py --tests="RFC2544"

To run all tests:

.. code:: bash

    ./vsperf --conf-file=user_settings.py

Some tests allow for configurable parameters, including test duration (in seconds) as well as packet sizes (in bytes).

.. code:: bash

    ./vsperf --conf-file user_settings.py \
        --tests RFC2544Tput \
        --test-param "rfc2544_duration=10;packet_sizes=128"

For all available options, check out the help dialog:

.. code:: bash

    ./vsperf --help


Testcases
---------
The available tests in VSPERF are:

  * phy2phy_tput
  * phy2phy_forwarding
  * back2back
  * phy2phy_tput_mod_vlan
  * phy2phy_cont
  * pvp_cont
  * pvvp_cont
  * pvpv_cont
  * phy2phy_scalability
  * pvp_tput
  * pvp_back2back
  * pvvp_tput
  * pvvp_back2back
  * phy2phy_cpu_load
  * phy2phy_mem_load

VSPERF modes of operation
-------------------------

VSPERF can be run in different modes. By default it will configure the vSwitch,
the traffic generator and the VNF. However, it can be used just for configuration
and execution of the traffic generator. Another option is execution of all
components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
        Values:
        "normal" - execute vSwitch, VNF and traffic generator
        "trafficgen" - execute only traffic generator
        "trafficgen-off" - execute vSwitch and VNF
        "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission

In case VSPERF is executed in "trafficgen" mode, the configuration of the
traffic generator can be modified through the ``TRAFFIC`` dictionary passed to the
``--test-params`` option. It is not necessary to specify all values of the ``TRAFFIC``
dictionary; it is sufficient to specify only the values which should be changed.
A detailed description of the ``TRAFFIC`` dictionary can be found at :ref:`configuration-of-traffic-dictionary`.

Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
      --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"


================================
Packet Forwarding Test Scenarios
================================
KVMFORNFV currently implements three scenarios as part of testing:

  * Host Scenario
  * Guest Scenario
  * SR-IOV Scenario


Packet Forwarding Host Scenario
-------------------------------
Here the Host is NODE-2. It has VSPERF installed and is properly configured to use the IXIA traffic generator by providing the IXIA card, ports and lib paths along with the IP.
Please refer to Figure 2.

.. Figure:: ../images/Host_Scenario.png

Packet Forwarding Guest Scenario
--------------------------------
Here the guest is a Virtual Machine (VM) launched by using a modified CentOS image (provided by vsperf)
on Node-2 (Host) using QEMU. In this scenario, the packet is initially forwarded to the Host, which
then forwards it to the launched guest. The time taken by the packet to reach the IXIA traffic generator
via Host and Guest is calculated and published as a test result of this scenario.

.. Figure:: ../images/Guest_Scenario.png

Packet Forwarding SRIOV Scenario
--------------------------------
Unlike the packet forwarding to Guest-via-Host scenario, here the packet generated at the IXIA is
directly forwarded to the Guest VM launched on the Host by implementing an SR-IOV interface at the NIC
level of the Host, i.e., Node-2. The time taken by the packet to reach the IXIA traffic generator is
calculated and published as a test result for this scenario. SRIOV-support_ is given below; it details
how to use SR-IOV.

.. Figure:: ../images/SRIOV_Scenario.png

Using vfio_pci with DPDK
------------------------

To use vfio with DPDK instead of igb_uio, add the following parameter to your custom
configuration file:

.. code-block:: python

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']


**NOTE:** In case DPDK is installed from a binary package, please
set ``PATHS['dpdk']['bin']['modules']`` instead.

**NOTE:** Please ensure that Intel VT-d is enabled in the BIOS.

**NOTE:** Please ensure your boot/grub parameters include
the following:

.. code-block:: console

    iommu=pt intel_iommu=on

To check that the IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep IOMMU
    [    0.000000] Intel-IOMMU: enabled
    [    0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139893] IOAPIC id 2 under DRHD base  0xfbffe000 IOMMU 0
    [    0.139894] IOAPIC id 0 under DRHD base  0xebffc000 IOMMU 1
    [    0.139895] IOAPIC id 1 under DRHD base  0xebffc000 IOMMU 1
    [    3.335744] IOMMU: dmar0 using Queued invalidation
    [    3.335746] IOMMU: dmar1 using Queued invalidation
    ....

.. _SRIOV-support:

Using SRIOV support
-------------------

To use the virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']

Where ``vf`` is an indication of virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:03:00.0'
and four VFs will be configured for NIC '0000:03:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and it will use them during
test execution.

At the end of the vswitchperf execution, SRIOV support will be disabled.

SRIOV support is generic and it can be used in different testing scenarios.
For example:

* vSwitch tests with DPDK or without DPDK support to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where the VM accesses VF interfaces directly
  by PCI passthrough to measure raw VM throughput performance

diff --git a/docs/userguide/pcm_utility.userguide.rst b/docs/userguide/pcm_utility.userguide.rst
new file mode 100644
index 000000000..baef7059a
--- /dev/null
+++ b/docs/userguide/pcm_utility.userguide.rst
@@ -0,0 +1,126 @@

=========================================================
Collecting Memory Bandwidth Information using PCM utility
=========================================================

About PCM utility
-----------------
The Intel® Performance Counter Monitor provides sample C++ routines and utilities to estimate the
internal resource utilization of the latest Intel® Xeon® and Core™ processors and gain a significant
performance boost. In the Intel PCM toolset, there is a pcm-memory.x tool which is used for observing
the memory traffic intensity.

Version Features
----------------

+-----------------------------+-----------------------------------------------+
|                             |                                               |
| **Release**                 | **Features**                                  |
|                             |                                               |
+=============================+===============================================+
|                             | - In the Colorado release, memory bandwidth   |
| Colorado                    |   information is not collected through the    |
|                             |   cyclic testcases.                           |
+-----------------------------+-----------------------------------------------+
|                             | - pcm-memory.x provides the memory bandwidth  |
|                             |   data throughout the testcases               |
|                             | - pcm-memory.x will be executed before the    |
| Danube                      |   execution of every testcase                 |
|                             | - used for all test-types (stress/idle)       |
|                             | - generated memory bandwidth logs, which are  |
|                             |   published to the KVMFORNFV artifacts        |
+-----------------------------+-----------------------------------------------+

Implementation of pcm-memory.x
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The tool measures the memory bandwidth observed for every channel, reporting throughputs
for reads from memory and writes to memory separately. The pcm-memory.x tool tends to report
values slightly higher than the application's own measurement.

Command:

.. code:: bash

    sudo ./pcm-memory.x [Delay]/[external_program]

Parameters:

- pcm-memory can be called with either a delay or an external_program/application as a parameter

- If the delay is given as 5, then the output will be produced with a refresh of every 5 seconds.

- If external_program is a script/application, then the output will be produced after the execution of the application or the script passed as a parameter.

**Sample Output:**

The output below was produced with the default refresh of 1 second.
+---------------------------------------+---------------------------------------+
| Socket 0                              | Socket 1                              |
+=======================================+=======================================+
| Memory Performance Monitoring         | Memory Performance Monitoring         |
|                                       |                                       |
+---------------------------------------+---------------------------------------+
| Mem Ch 0: Reads (MB/s): 6870.81       | Mem Ch 0: Reads (MB/s): 7406.36       |
|      Writes(MB/s): 1805.03            |      Writes(MB/s): 1951.25            |
| Mem Ch 1: Reads (MB/s): 6873.91       | Mem Ch 1: Reads (MB/s): 7411.11       |
|      Writes(MB/s): 1810.86            |      Writes(MB/s): 1957.73            |
| Mem Ch 2: Reads (MB/s): 6866.77       | Mem Ch 2: Reads (MB/s): 7403.39       |
|      Writes(MB/s): 1804.38            |      Writes(MB/s): 1951.42            |
| Mem Ch 3: Reads (MB/s): 6867.47       | Mem Ch 3: Reads (MB/s): 7403.66       |
|      Writes(MB/s): 1805.53            |      Writes(MB/s): 1950.95            |
|                                       |                                       |
| NODE0 Mem Read (MB/s): 27478.96       | NODE1 Mem Read (MB/s): 29624.51       |
| NODE0 Mem Write (MB/s): 7225.79       | NODE1 Mem Write (MB/s): 7811.36       |
| NODE0 P. Write (T/s): 214810          | NODE1 P. Write (T/s): 238294          |
| NODE0 Memory (MB/s): 34704.75         | NODE1 Memory (MB/s): 37435.87         |
+---------------------------------------+---------------------------------------+
| - System Read Throughput(MB/s): 57103.47                                      |
| - System Write Throughput(MB/s): 15037.15                                     |
| - System Memory Throughput(MB/s): 72140.62                                    |
+-------------------------------------------------------------------------------+

pcm-memory.x in KVMFORNFV
~~~~~~~~~~~~~~~~~~~~~~~~~

pcm-memory is a part of KVMFORNFV in the Danube release. pcm-memory.x will be executed with a delay
of 60 seconds before starting every testcase to monitor the memory traffic intensity, which is
handled in the collect_MBWInfo function. The memory bandwidth information will be collected into
the logs throughout the testcase, updating every 60 seconds.

**Pre-requisites:**

1. Check for the processors supported by PCM. The latest pcm utility version (2.11) supports the Intel® Xeon® E5 v4 processor family.
2. Disabling the NMI watchdog

3. Installing the MSR kernel module


Memory Bandwidth logs for KVMFORNFV can be found `here`_:

.. code:: bash

    http://artifacts.opnfv.org/kvmfornfv.html

.. _here: http://artifacts.opnfv.org/kvmfornfv.html

Details of the functions implemented:

The install_Pcm function handles the installation of the pcm utility and the prerequisites required for the pcm-memory.x tool to execute.

.. code:: bash

    git clone https://github.com/opcm/pcm
    cd pcm
    make

In the collect_MBWInfo function, the command below is executed on the node and its output is
collected into the logs with the timestamp and testType. The function is called at the beginning
of each testcase, and a signal is passed to terminate the pcm-memory process which was executing
throughout the cyclic testcase.

.. code:: bash

    pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp}
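The monitoring flow described above can be sketched as a small shell wrapper. This is an illustrative sketch only — the function names, the PID file, and the helper structure are assumptions, not the actual KVMFORNFV scripts; only the log path pattern and the 60-second refresh come from the guide:

```shell
#!/bin/sh
# Illustrative sketch of wrapping a cyclic testcase with pcm-memory.x
# monitoring. Function names and the PID file are hypothetical.
MBW_DIR=/root/MBWInfo

# Build the log file name for a given test type and timestamp,
# mirroring the MBWInfo_${testType}_${timeStamp} pattern above.
mbw_log_name() {
    echo "${MBW_DIR}/MBWInfo_${1}_${2}"
}

start_mbw_monitor() {
    testType=$1
    timeStamp=$(date +%Y%m%d%H%M%S)
    mkdir -p "${MBW_DIR}"
    # Refresh every 60 seconds until the monitor is terminated.
    ./pcm-memory.x 60 > "$(mbw_log_name "${testType}" "${timeStamp}")" 2>&1 &
    echo $! > "${MBW_DIR}/pcm.pid"
}

stop_mbw_monitor() {
    # Signal pcm-memory to stop once the cyclic testcase finishes.
    kill "$(cat "${MBW_DIR}/pcm.pid")" 2>/dev/null
}
```

A testcase runner would call ``start_mbw_monitor <testType>`` before the cyclic testcase begins and ``stop_mbw_monitor`` after it completes.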