From 7964c56aab8618d0b9a2b6d5c483a14b642d4afc Mon Sep 17 00:00:00 2001
From: Zhifeng Jiang <silkjzf@Zhifengs-MacBook-Pro.local>
Date: Sun, 9 Jul 2017 07:12:27 -0700
Subject: Add system information and test condition proposal for storage QPI.

JIRA: QTIP-246

Change-Id: Icdd3663ea4db89554055e97b411f5cb935504aea
Signed-off-by: Zhifeng Jiang <jiang.zhifeng@zte.com.cn>
---
 docs/proposal/storage-qpi.rst | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/docs/proposal/storage-qpi.rst b/docs/proposal/storage-qpi.rst
index af7c7954..8a82241e 100644
--- a/docs/proposal/storage-qpi.rst
+++ b/docs/proposal/storage-qpi.rst
@@ -13,6 +13,48 @@ The measurement is done by `StorPerf`_.
 
 .. _StorPerf: https://wiki.opnfv.org/display/storperf
 
+System Information
+==================
+
+System information consists of the environmental parameters and factors that may affect storage performance:
+
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| System Factors                 | Detail                                          | Extraction Method                                                           |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| Ceph Node List                 | List of nodes which have the ceph-osd role,     | Obtained from the output of the installer's node list CLI command.          |
+|                                | e.g. [node-2, node-3, node-4].                  |                                                                             |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| Ceph Client RBD Cache Mode     | Values: "None", "write-through", "write-back".  | Derived from the "rbd cache" and "rbd cache max dirty" keys in the [client] |
+|                                |                                                 | section of the Ceph configuration; write-through mode is enabled by setting |
+|                                |                                                 | "rbd cache max dirty" to 0.                                                 |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| Ceph Client RBD Cache Size     | The RBD cache size in bytes.                    | Obtained from the "rbd cache size" key in the [client] section              |
+|                                | The default is 32 MiB.                          | of the Ceph configuration.                                                  |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| Ceph OSD Tier Cache Mode       | Values: "none", "writeback", "readonly".        | Obtained from the output of the "ceph report" CLI command.                  |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| Use SSD Backed OSD Cache       | Values: "Yes", "No".                            | Obtained from the POD description and the output of the "ceph-disk list"    |
+|                                |                                                 | CLI command.                                                                |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| Use SSD For Journal            | Values: "Yes", "No".                            | Obtained from the POD description and the output of the "ceph-disk list"    |
+|                                |                                                 | CLI command.                                                                |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
+| Ceph Cluster Network Bandwidth | Values: "1G", "10G", "40G".                     | Obtained from the physical interface information in the POD description,    |
+|                                |                                                 | the "ifconfig" output on the Ceph OSD nodes, and the "cluster network" key  |
+|                                |                                                 | in the [global] section of the Ceph configuration.                          |
++--------------------------------+-------------------------------------------------+-----------------------------------------------------------------------------+
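+
+As an illustration, the client RBD cache mode could be classified with a short
+script following the rule in the table above (a minimal sketch; the
+configuration path, the fallback defaults, and the helper name are assumptions
+rather than part of this proposal):
+
+.. code-block:: python
+
+    import configparser
+
+    def rbd_cache_mode(conf_path='/etc/ceph/ceph.conf'):
+        # Classify the client RBD cache mode: cache disabled -> "None";
+        # "rbd cache max dirty" == 0 -> "write-through"; otherwise "write-back".
+        parser = configparser.ConfigParser()
+        parser.read(conf_path)
+        client = parser['client'] if parser.has_section('client') else {}
+        # "rbd cache" toggles the client cache (assumed default: true).
+        if str(client.get('rbd cache', 'true')).lower() not in ('true', '1', 'yes'):
+            return 'None'
+        # Assumed default of 24 MiB for "rbd cache max dirty".
+        max_dirty = int(client.get('rbd cache max dirty', 25165824))
+        return 'write-through' if max_dirty == 0 else 'write-back'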
+
+Test Condition
+==============
+
+Test conditions are the parameters of the test execution that may affect the measured storage QPI:
+
++-----------------------------+------------------------------------------+-------------------------------------------------------------+
+| Test Condition              | Detail                                   | Extraction Method                                           |
++-----------------------------+------------------------------------------+-------------------------------------------------------------+
+| Number of Testing VMs       | Number of VMs created while running      | It equals the number of Cinder nodes of the SUT.            |
+|                             | the StorPerf test case.                  |                                                             |
++-----------------------------+------------------------------------------+-------------------------------------------------------------+
+| Distribution of Testing VMs | Number of VMs on each compute node,      | The distribution is recorded while running                  |
+|                             | e.g. [(node-2: 1), (node-3: 2)].         | the StorPerf test case.                                     |
++-----------------------------+------------------------------------------+-------------------------------------------------------------+
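+
+One possible way to record the distribution is to group the testing VMs by
+hypervisor host through the Nova API (a sketch assuming admin credentials, a
+keystoneauth1 session, and a hypothetical "storperf" name filter; none of
+these are prescribed by this proposal):
+
+.. code-block:: python
+
+    from collections import Counter
+
+    from novaclient import client as nova_client
+
+    def vm_distribution(session, name_filter='storperf'):
+        # Count testing VMs per compute node,
+        # e.g. Counter({'node-3': 2, 'node-2': 1}).
+        nova = nova_client.Client('2', session=session)
+        servers = nova.servers.list(search_opts={'all_tenants': 1,
+                                                 'name': name_filter})
+        # "OS-EXT-SRV-ATTR:host" names the compute node hosting the VM;
+        # it is only visible to admin users.
+        return Counter(getattr(s, 'OS-EXT-SRV-ATTR:host') for s in servers)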
+
 Baseline
 ========
 