authorYujun Zhang <zhang.yujunz@zte.com.cn>2017-03-23 16:20:05 +0800
committerYujun Zhang <zhang.yujunz@zte.com.cn>2017-03-23 17:14:43 +0800
commitc8a6d44d0cd39c2dc658b28056fe9782006e6e06 (patch)
tree7ce1b574b346478d96023ecce37e6cde48c54630 /docs/proposal
parent5c4c42d794a8f3ca0708098790320d2a022ec8ec (diff)
Cleanup `docs` folder
- move legacy document to `/legacy/docs`
- move proposals to `docs/proposal`
- remove unused `.gitkeep` files

Change-Id: I1ad83ae98b7a6b3bb1738ced9b1f0d22c9c296b6
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
Diffstat (limited to 'docs/proposal')
-rw-r--r--  docs/proposal/dashboard.rst                   151
-rw-r--r--  docs/proposal/integration_with_yardstick.rst   92
2 files changed, 243 insertions, 0 deletions
diff --git a/docs/proposal/dashboard.rst b/docs/proposal/dashboard.rst
new file mode 100644
index 00000000..60c4720d
--- /dev/null
+++ b/docs/proposal/dashboard.rst
@@ -0,0 +1,151 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2016 ZTE Corp.
+
+
+*********
+Dashboard
+*********
+
+The dashboard gives users an intuitive view of benchmark results.
+
+Purpose
+=======
+
+The basic element to be displayed is the QPI, a.k.a. QTIP Performance Index. But
+it is also important to show the user
+
+#. How is the final score calculated?
+#. Under what conditions is the test plan executed?
+#. How many runs of a performance test have been executed, and is there any deviation?
+#. How do benchmark results from different PODs or configurations compare?
+
+Templates
+=========
+
+Different board templates are created to satisfy the above requirements.
+
+Composition
+-----------
+
+QTIP gives a simple score, but there is a complex formula behind it. This view
+explains the composition of the QPI.
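+
+A minimal sketch of such a composition, assuming illustrative section names and
+weights rather than the actual QPI formula::
+
+    # Illustrative only: weighted aggregation of per-section scores into one index.
+    # The real QPI formula is defined by the QTIP metric specs, not by this sketch.
+    def aggregate_qpi(section_scores, weights):
+        """Combine per-section scores (0..1) into a single index."""
+        total_weight = sum(weights.values())
+        return sum(score * weights[name]
+                   for name, score in section_scores.items()) / total_weight
+
+    qpi = aggregate_qpi({'compute': 0.87, 'storage': 0.92, 'network': 0.78},
+                        {'compute': 1.0, 'storage': 1.0, 'network': 1.0})
+    print(qpi)  # 0.856...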
+
+Condition
+---------
+
+The conditions of a benchmark result include
+
+* System Under Test
+
+  * Hardware environment
+  * Hypervisor version
+  * Operating System release version
+  * System Configuration
+
+* Test Tools
+
+  * Release version
+  * Configuration
+
+* Test Facility
+
+  * Laboratory
+  * Engineer
+  * Date
+
+Conditions that do NOT have an obvious effect on the test result may be ignored,
+e.g. temperature or power supply.
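+
+One possible way to record the conditions above alongside a result is a nested
+mapping; the field names and values below are only illustrative::
+
+    # Hypothetical condition record attached to one benchmark result.
+    condition = {
+        'sut': {
+            'hardware': 'vendor X, 2 sockets, 128 GB RAM',
+            'hypervisor': 'KVM 2.5.0',
+            'os_release': 'Ubuntu 16.04',
+            'configuration': 'default',
+        },
+        'test_tools': {'release': '1.0.0', 'configuration': 'default'},
+        'facility': {'laboratory': 'POD-1', 'engineer': 'jane.doe', 'date': '2017-03-23'},
+    }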
+
+Stats
+-----
+
+Performance tests are essentially measurements of specific metrics. All measurements
+come with uncertainty. The final result is normally one metric, or a group of metrics,
+calculated from many repeated runs.
+
+For each metric, the stats board shall consist of a diagram of all measured
+values and a box of stats::
+
+ ^ +------------+
+ | | count: ? |
+ | |average: ? |
+ | | min: ? |
+ | X | max: ? |
+ | XXXX XXXX X XXXXX | |
+ |X XX XX XX XXX XXX XX | |
+ | XXXXXX X XXXXX XX | |
+ | | |
+ | | |
+ | | |
+ | | |
+ | | |
+ +---------------------------------------------> +------------+
+
+The type of diagram and the selection of stats shall depend on which metric is shown.
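+
+A minimal sketch of how the stats box could be filled from repeated measurements,
+using only the Python standard library::
+
+    import statistics
+
+    def stats_box(samples):
+        """Summarize repeated measurements of one metric for the stats board."""
+        return {
+            'count': len(samples),
+            'average': statistics.mean(samples),
+            'min': min(samples),
+            'max': max(samples),
+            'stdev': statistics.stdev(samples) if len(samples) > 1 else 0.0,
+        }
+
+    latency_ms = [1.92, 2.01, 1.87, 2.10, 1.95]   # e.g. one metric over five runs
+    print(stats_box(latency_ms))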
+
+Comparison
+----------
+
+Comparison can be done between different PODs, or between different configurations
+on the same POD.
+
+In a comparison view, the metrics are displayed in the same diagram and the
+parameters are listed side by side.
+
+Both common and differing parameters are listed. Common values are merged into
+the same cell, and the user may configure the view to hide the common rows.
+
+A draft design is as follows::
+
+ ^
+ |
+ |
+ |
+ | XXXXXXXX
+ | XXX XX+-+ XXXXXXXXXX
+ | XXX +XXXX XXXXX
+ +-+XX X +--+ ++ XXXXXX +-+
+ | X+-+X +----+ +-+ +----+X
+ |X +--+ +---+ XXXXXX X
+ | +-------+ X
+ |
+ |
+ +----------------------------------------------------->
+
+ +--------------------+----------------+---------------+
+ | different param 1 | | |
+ | | | |
+ +-----------------------------------------------------+
+ | different param 2 | | |
+ | | | |
+ +-------------------------------------+---------------+
+ | common param 1 | |
+ | | |
+ +-------------------------------------+---------------+
+ | different param 3 | | |
+ | | | |
+ +-------------------------------------+---------------+
+ | common param 2 | |
+ | | |
+ +--------------------+--------------------------------+
+ +------------+
+ | HIDE COMMON|
+ +------------+
+
+Time line
+---------
+
+A time line diagram is used for the analysis of time-critical performance tests::
+
+ +-----------------+-----------+-------------+-------------+-----+
+ | | | | | |
+ +-----------------> | | | |
+ | +-----------> | | |
+ | ? ms +-------------> | |
+ | ? ms +------------>+ |
+ | ? ms ? ms |
+ | |
+ +---------------------------------------------------------------+
+
+The time cost between checkpoints shall be displayed in the diagram.
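+
+The costs between checkpoints might be derived from recorded timestamps roughly as
+below; the checkpoint names are illustrative::
+
+    def checkpoint_costs(checkpoints):
+        """Compute the time cost (ms) between consecutive checkpoints."""
+        return [(a, b, t2 - t1)
+                for (a, t1), (b, t2) in zip(checkpoints, checkpoints[1:])]
+
+    # (checkpoint name, timestamp in ms), e.g. collected from test tool logs.
+    timestamps = [('start', 0.0), ('vm_booted', 850.0),
+                  ('agent_ready', 1300.0), ('test_done', 4200.0)]
+    for src, dst, cost in checkpoint_costs(timestamps):
+        print('%s -> %s: %.0f ms' % (src, dst, cost))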
diff --git a/docs/proposal/integration_with_yardstick.rst b/docs/proposal/integration_with_yardstick.rst
new file mode 100644
index 00000000..a8298d6f
--- /dev/null
+++ b/docs/proposal/integration_with_yardstick.rst
@@ -0,0 +1,92 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2016 ZTE Corp.
+
+
+**************************
+Integration with Yardstick
+**************************
+
+Problem description
+===================
+
+For each specified QPI [1]_, QTIP needs to select a suite of test cases and collect
+the required test results. Based on these results, QTIP calculates the score.
+
+Proposed change
+===============
+QTIP has a flexible architecture [2]_ that supports different modes: standalone and agent.
+It is recommended to use **agent mode** to work with existing test runners. Yardstick will
+act as a runner to generate test results and trigger the QTIP agent on completion of a test.
+
+
+Work Items in Yardstick
+-----------------------
+
+1. Create a customized suite in Yardstick
+
+Yardstick not only has many existing suites but also supports customized suites. QTIP could
+create a suite named **Qtip-PoC** in the Yardstick repo to verify the workflow of QTIP agent mode.
+
+2. Launch QTIP in Yardstick
+
+Whether to launch QTIP is determined by checking the existence of the OS environment
+variable *QTIP*. If it exists, QTIP will be installed and launched via the Yardstick CLI
+`yardstick plugin install` [3]_.
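+
+A minimal sketch of that check; the hook point inside Yardstick and the plugin
+configuration path are not prescribed here::
+
+    import os
+    import subprocess
+
+    def maybe_install_qtip(plugin_config='plugin/qtip.yaml'):
+        """Install the QTIP plugin only if the QTIP environment variable is set."""
+        if os.environ.get('QTIP'):
+            subprocess.check_call(['yardstick', 'plugin', 'install', plugin_config])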
+
+3. Yardstick interacts with QTIP
+
+See
+`Yardstick-Qtip+integration <https://wiki.opnfv.org/display/yardstick/Yardstick-Qtip+integration>`_
+for details.
+
+Work Items in QTIP
+------------------
+
+1. Provide an API for Yardstick to post test result and environment info
+
+After completing test execution, Yardstick will post the test results and environment
+info in JSON format via the QTIP API. See
+`Yardstick-Qtip+integration <https://wiki.opnfv.org/display/yardstick/Yardstick-Qtip+integration>`_
+for details.
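+
+A rough sketch of such a call from the Yardstick side, assuming the ``requests``
+library; the URL and payload layout are placeholders, and the actual API is
+described on the wiki page above::
+
+    import requests
+
+    def post_result_to_qtip(result, api_url='http://qtip-server:5000/api/v1.0/results'):
+        """Post a Yardstick test result and its environment info to the QTIP API."""
+        response = requests.post(api_url, json=result)  # result is a JSON-serializable dict
+        response.raise_for_status()
+        return response.json()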
+
+2. Parse the Yardstick test result
+
+When the QTIP agent receives the Yardstick test result and environment info, it will
+extract the metrics defined in the metric spec configuration file. Based on these
+metrics, the QTIP agent will calculate the QPI.
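+
+A rough sketch of that flow; the metric spec format and baselines shown are
+illustrative, not the actual QTIP spec::
+
+    # Which fields to pull out of the Yardstick result and the baseline each one
+    # is scored against. Hypothetical layout, not the real QTIP spec format.
+    METRIC_SPEC = {
+        'dhrystone': {'field': 'single_score', 'baseline': 3200.0},
+        'whetstone': {'field': 'single_score', 'baseline': 3000.0},
+    }
+
+    def extract_metrics(yardstick_result, spec=METRIC_SPEC):
+        """Pull the metrics named in the spec out of a Yardstick result payload."""
+        return {name: yardstick_result[name][rule['field']]
+                for name, rule in spec.items()}
+
+    def calculate_qpi(metrics, spec=METRIC_SPEC):
+        """Score each metric against its baseline and average into one index."""
+        scores = [metrics[name] / rule['baseline'] for name, rule in spec.items()]
+        return sum(scores) / len(scores)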
+
+3. Provide an API for querying QPI
+
+QTIP will provide an API for querying the QPI. See
+`Yardstick-Qtip+integration <https://wiki.opnfv.org/display/yardstick/Yardstick-Qtip+integration>`_
+for details.
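+
+A consumer might then query the calculated QPI roughly as follows; the endpoint
+is a placeholder::
+
+    import requests
+
+    def get_qpi(pod_name, api_url='http://qtip-server:5000/api/v1.0/qpi'):
+        """Query the QPI calculated for a given POD."""
+        response = requests.get(api_url, params={'pod': pod_name})
+        response.raise_for_status()
+        return response.json()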
+
+Implementation
+==============
+
+Assignee(s)
+-----------
+
+*Primary assignee:*
+ wu.zhihui
+
+*Other contributors:*
+ TBD
+
+Testing
+=======
+
+The changes will be covered by new unit tests.
+
+Documentation
+=============
+
+TBD
+
+References
+==========
+
+.. [1] QTIP performance index
+.. [2] https://wiki.opnfv.org/display/qtip/Architecture
+.. [3] https://wiki.opnfv.org/display/yardstick/How+to+install+a+plug-in+into+Yardstick