Diffstat (limited to 'docs')
-rw-r--r--docs/designspec/dashboard.rst91
-rw-r--r--docs/releasenotes/brahmaputra.rst78
-rw-r--r--docs/releasenotes/index.rst14
3 files changed, 178 insertions, 5 deletions
diff --git a/docs/designspec/dashboard.rst b/docs/designspec/dashboard.rst
index ad5520b6..60c4720d 100644
--- a/docs/designspec/dashboard.rst
+++ b/docs/designspec/dashboard.rst
@@ -57,14 +57,95 @@ The condition of a benchmark result includes
Conditions that do NOT have an obvious effect on the test result may be ignored,
e.g. temperature, power supply.
-Deviation
----------
-
-Performance tests are usually repeated many times to reduce random disturbance.
-This view shall show an overview of deviation among different runs.
+Stats
+-----
+
+Performance tests are essentially measurements of specific metrics. Every
+measurement comes with uncertainty. The final result is normally one metric,
+or a group of metrics, calculated from many repeated runs.
+
+For each metric, the stats board shall consist of a diagram of all measured
+values and a box of stats::
+
+ ^ +------------+
+ | | count: ? |
+ | |average: ? |
+ | | min: ? |
+ | X | max: ? |
+ | XXXX XXXX X XXXXX | |
+ |X XX XX XX XXX XXX XX | |
+ | XXXXXX X XXXXX XX | |
+ | | |
+ | | |
+ | | |
+ | | |
+ | | |
+ +---------------------------------------------> +------------+
+
+The type of diagram and the selection of stats shall depend on the metric being
+shown.
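The stats box above could be filled in by a small helper like the following sketch; the `summarize` function and the sample data are hypothetical, since the spec does not name the implementation:

```python
from statistics import mean

def summarize(samples):
    """Compute the stats-box values (count, average, min, max) for one
    metric from its repeated measurements.

    `samples` is a hypothetical list of raw measured values; a real
    dashboard would pull these from the stored test results.
    """
    return {
        "count": len(samples),
        "average": mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

# Example: latency samples (ms) collected over repeated runs
print(summarize([12.1, 11.8, 12.4, 11.9]))
```

The diagram part would plot the same `samples` list, e.g. as a scatter or histogram depending on the metric.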
Comparison
----------
Comparison can be done between different PODs or different configurations on the
same POD.
+
+In a comparison view, the metrics are displayed in the same diagram, and the
+parameters are listed side by side.
+
+Both common and differing parameters are listed. Common values are merged into
+a single cell, and the user may configure the view to hide common rows.
+
+A draft design is as follows::
+
+ ^
+ |
+ |
+ |
+ | XXXXXXXX
+ | XXX XX+-+ XXXXXXXXXX
+ | XXX +XXXX XXXXX
+ +-+XX X +--+ ++ XXXXXX +-+
+ | X+-+X +----+ +-+ +----+X
+ |X +--+ +---+ XXXXXX X
+ | +-------+ X
+ |
+ |
+ +----------------------------------------------------->
+
+ +--------------------+----------------+---------------+
+ | different param 1 | | |
+ | | | |
+ +-----------------------------------------------------+
+ | different param 2 | | |
+ | | | |
+ +-------------------------------------+---------------+
+ | common param 1 | |
+ | | |
+ +-------------------------------------+---------------+
+ | different param 3 | | |
+ | | | |
+ +-------------------------------------+---------------+
+ | common param 2 | |
+ | | |
+ +--------------------+--------------------------------+
+ +------------+
+ | HIDE COMMON|
+ +------------+
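The common/different split shown in the draft table could be computed as in this sketch; the function name and the run-parameter dicts are illustrative assumptions, not part of the spec:

```python
def split_params(runs):
    """Partition run parameters into common and differing rows.

    `runs` is a hypothetical list of dicts, one per compared run,
    mapping parameter name -> value.
    """
    keys = set().union(*(r.keys() for r in runs))
    common, different = {}, {}
    for key in sorted(keys):
        values = [r.get(key) for r in runs]
        if len(set(values)) == 1:
            common[key] = values[0]   # merged into one shared cell
        else:
            different[key] = values   # one cell per run
    return common, different

pod_a = {"pod": "pod-a", "image": "cirros", "flavor": "m1.small"}
pod_b = {"pod": "pod-b", "image": "cirros", "flavor": "m1.large"}
common, different = split_params([pod_a, pod_b])
```

The "HIDE COMMON" toggle in the draft view would then simply omit the `common` rows from the rendered table.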
+
+Time line
+---------
+
+A timeline diagram supports the analysis of time-critical performance tests::
+
+ +-----------------+-----------+-------------+-------------+-----+
+ | | | | | |
+ +-----------------> | | | |
+ | +-----------> | | |
+ | ? ms +-------------> | |
+ | ? ms +------------>+ |
+ | ? ms ? ms |
+ | |
+ +---------------------------------------------------------------+
+
+The time cost between checkpoints shall be displayed in the diagram.
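The "? ms" segments between checkpoints could be derived as in this sketch; the checkpoint names and timestamps are made-up examples, since the spec does not define the data model:

```python
def checkpoint_costs(checkpoints):
    """Compute the time cost (ms) between consecutive checkpoints.

    `checkpoints` is a hypothetical ordered list of (name, timestamp_ms)
    pairs recorded during a time-critical test.
    """
    costs = []
    for (name_a, t_a), (name_b, t_b) in zip(checkpoints, checkpoints[1:]):
        costs.append((f"{name_a} -> {name_b}", t_b - t_a))
    return costs

# Example: each entry corresponds to one "? ms" segment in the diagram
marks = [("boot", 0), ("net-up", 35), ("ssh-ready", 120), ("done", 150)]
print(checkpoint_costs(marks))
```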
diff --git a/docs/releasenotes/brahmaputra.rst b/docs/releasenotes/brahmaputra.rst
new file mode 100644
index 00000000..92fafd80
--- /dev/null
+++ b/docs/releasenotes/brahmaputra.rst
@@ -0,0 +1,78 @@
+***********
+Brahmaputra
+***********
+
+NOTE: The release notes for OPNFV Brahmaputra are missing. This is a copy of
+the README.
+
+QTIP Benchmark Suite
+====================
+
+QTIP is a benchmarking suite intended to benchmark the following components of the OPNFV Platform:
+
+1. Computing components
+2. Networking components
+3. Storage components
+
+The efforts in QTIP are mostly focused on identifying
+
+1. Benchmarks to run
+2. Test cases in which these benchmarks are run
+3. Automation of the suite to run the benchmarks within different test cases
+4. Collection of test results
+
+The QTIP framework can now be invoked as `qtip.py`.
+
+The Framework can run 5 computing benchmarks:
+
+1. Dhrystone
+2. Whetstone
+3. RamBandwidth
+4. SSL
+5. nDPI
+
+These benchmarks can be run in 2 test cases:
+
+1. VM vs Baremetal
+2. Baremetal vs Baremetal
+
+Instructions to run the script:
+
+1. Download and source the OpenStack `adminrc` file for the deployment on which you want to create the VM for benchmarking
+2. Run `python qtip.py -s {SUITE} -b {BENCHMARK}`
+3. Run `python qtip.py -h` for more help
+4. A list of benchmarks can be found in the `qtip/test_cases` directory
+5. SUITE refers to compute, network or storage
+
+Requirements:
+
+1. Ansible 1.9.2
+2. Python 2.7
+3. PyYAML
+
+Configuring Test Cases:
+
+Test cases can be found within the `test_cases` directory.
+For each test case, a Config.yaml file contains the details of the machines on
+which the benchmarks will run.
+Edit the IP and the Password fields within these files for the machines on
+which the benchmark is to run.
+A more robust framework that allows more tests to be included will be added in
+the future.
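A minimal sketch of what such a Config.yaml might contain; only the IP and Password fields are mentioned in this README, so the surrounding key names and layout are assumptions:

```yaml
# Hypothetical layout -- edit the ip and pw values per machine.
# Only the IP and Password fields are documented; the rest is assumed.
Host_Machines:
  machine_1:
    ip: 10.20.0.10     # address of the machine to benchmark
    pw: secret         # password login must be enabled on the host
```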
+
+Jump Host requirements:
+
+The following packages should be installed on the server from which you intend to run QTIP.
+
+1. Heat Client
+2. Glance Client
+3. Nova Client
+4. Neutron Client
+5. wget
+6. PyYAML
+
+Networking requirements:
+
+1. The Host Machines/compute nodes to be benchmarked should have a public/access network
+2. The Host Machines/compute nodes should allow password login
+
+QTIP support for Foreman
+
+{TBA}
diff --git a/docs/releasenotes/index.rst b/docs/releasenotes/index.rst
new file mode 100644
index 00000000..5d045388
--- /dev/null
+++ b/docs/releasenotes/index.rst
@@ -0,0 +1,14 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2015 Dell Inc.
+.. (c) 2016 ZTE Corp.
+
+
+##################
+QTIP Release Notes
+##################
+
+.. toctree::
+ :maxdepth: 2
+
+ brahmaputra.rst