From fd54fcc22170aa880fc49730730ad80896e2e608 Mon Sep 17 00:00:00 2001 From: rexlee8776 Date: Wed, 8 Mar 2017 07:12:55 +0000 Subject: Yardstick Preliminary Documentation JIRA: YARDSTICK-554 align with opnfvdocs path structure about testing projects Change-Id: I6c2f2d37e41447dccd76b9f4426d00fd85cb1e3b Signed-off-by: rexlee8776 --- docs/release/index.rst | 13 - docs/release/release-notes.rst | 693 ------------------- docs/release/release-notes/index.rst | 13 + docs/release/release-notes/release-notes.rst | 693 +++++++++++++++++++ docs/release/results/index.rst | 14 + docs/release/results/os-nosdn-kvm-ha.rst | 270 ++++++++ docs/release/results/os-nosdn-nofeature-ha.rst | 492 ++++++++++++++ docs/release/results/os-nosdn-nofeature-noha.rst | 259 +++++++ docs/release/results/os-odl_l2-bgpvpn-ha.rst | 53 ++ docs/release/results/os-odl_l2-nofeature-ha.rst | 743 +++++++++++++++++++++ docs/release/results/os-odl_l2-sfc-ha.rst | 231 +++++++ docs/release/results/os-onos-nofeature-ha.rst | 257 +++++++ docs/release/results/os-onos-sfc-ha.rst | 517 ++++++++++++++ docs/release/results/overview.rst | 106 +++ docs/release/results/results.rst | 57 ++ docs/release/results/yardstick-opnfv-ha.rst | 118 ++++ docs/release/results/yardstick-opnfv-kvm.rst | 38 ++ docs/release/results/yardstick-opnfv-parser.rst | 38 ++ docs/release/results/yardstick-opnfv-vtc.rst | 248 +++++++ docs/results/index.rst | 14 - docs/results/os-nosdn-kvm-ha.rst | 270 -------- docs/results/os-nosdn-nofeature-ha.rst | 492 -------------- docs/results/os-nosdn-nofeature-noha.rst | 259 ------- docs/results/os-odl_l2-bgpvpn-ha.rst | 53 -- docs/results/os-odl_l2-nofeature-ha.rst | 743 --------------------- docs/results/os-odl_l2-sfc-ha.rst | 231 ------- docs/results/os-onos-nofeature-ha.rst | 257 ------- docs/results/os-onos-sfc-ha.rst | 517 -------------- docs/results/overview.rst | 106 --- docs/results/results.rst | 57 -- docs/results/yardstick-opnfv-ha.rst | 118 ---- docs/results/yardstick-opnfv-kvm.rst | 38 -- docs/results/yardstick-opnfv-parser.rst | 38 -- docs/results/yardstick-opnfv-vtc.rst | 248 ------- docs/testing/user/userguide/01-introduction.rst | 79 +++ docs/testing/user/userguide/02-methodology.rst | 195 ++++++ docs/testing/user/userguide/03-architecture.rst | 266 ++++++++ docs/testing/user/userguide/04-vtc-overview.rst | 122 ++++ .../user/userguide/05-apexlake_installation.rst | 300 +++++++++ docs/testing/user/userguide/06-apexlake_api.rst | 89 +++ docs/testing/user/userguide/07-nsb-overview.rst | 177 +++++ .../testing/user/userguide/08-nsb_installation.rst | 253 +++++++ docs/testing/user/userguide/09-installation.rst | 401 +++++++++++ .../testing/user/userguide/10-yardstick_plugin.rst | 144 ++++ .../user/userguide/11-result-store-InfluxDB.rst | 86 +++ docs/testing/user/userguide/12-grafana.rst | 119 ++++ docs/testing/user/userguide/13-list-of-tcs.rst | 129 ++++ .../user/userguide/Yardstick_task_templates.rst | 160 +++++ docs/testing/user/userguide/comp-intro.rst | 37 + docs/testing/user/userguide/glossary.rst | 65 ++ docs/testing/user/userguide/images/Deployment.png | Bin 0 -> 17958 bytes .../user/userguide/images/Grafana_config.png | Bin 0 -> 143507 bytes .../user/userguide/images/InfluxDB_store.png | Bin 0 -> 1623955 bytes .../testing/user/userguide/images/Logical_view.png | Bin 0 -> 58840 bytes docs/testing/user/userguide/images/TC002.png | Bin 0 -> 106382 bytes docs/testing/user/userguide/images/Use_case.png | Bin 0 -> 105787 bytes docs/testing/user/userguide/images/add.png | Bin 0 -> 169904 bytes 
docs/testing/user/userguide/images/login.png | Bin 0 -> 32761 bytes .../userguide/images/results_visualization.png | Bin 0 -> 41905 bytes .../user/userguide/images/test_execution_flow.png | Bin 0 -> 51473 bytes docs/testing/user/userguide/index.rst | 27 + .../user/userguide/opnfv_yardstick_tc001.rst | 133 ++++ .../user/userguide/opnfv_yardstick_tc002.rst | 126 ++++ .../user/userguide/opnfv_yardstick_tc004.rst | 110 +++ .../user/userguide/opnfv_yardstick_tc005.rst | 125 ++++ .../user/userguide/opnfv_yardstick_tc006.rst | 144 ++++ .../user/userguide/opnfv_yardstick_tc007.rst | 162 +++++ .../user/userguide/opnfv_yardstick_tc008.rst | 90 +++ .../user/userguide/opnfv_yardstick_tc009.rst | 89 +++ .../user/userguide/opnfv_yardstick_tc010.rst | 154 +++++ .../user/userguide/opnfv_yardstick_tc011.rst | 123 ++++ .../user/userguide/opnfv_yardstick_tc012.rst | 135 ++++ .../user/userguide/opnfv_yardstick_tc014.rst | 126 ++++ .../user/userguide/opnfv_yardstick_tc019.rst | 134 ++++ .../user/userguide/opnfv_yardstick_tc020.rst | 141 ++++ .../user/userguide/opnfv_yardstick_tc021.rst | 157 +++++ .../user/userguide/opnfv_yardstick_tc024.rst | 76 +++ .../user/userguide/opnfv_yardstick_tc025.rst | 123 ++++ .../user/userguide/opnfv_yardstick_tc027.rst | 95 +++ .../user/userguide/opnfv_yardstick_tc028.rst | 70 ++ .../user/userguide/opnfv_yardstick_tc037.rst | 167 +++++ .../user/userguide/opnfv_yardstick_tc038.rst | 104 +++ .../user/userguide/opnfv_yardstick_tc040.rst | 65 ++ .../user/userguide/opnfv_yardstick_tc042.rst | 87 +++ .../user/userguide/opnfv_yardstick_tc043.rst | 102 +++ .../user/userguide/opnfv_yardstick_tc044.rst | 82 +++ .../user/userguide/opnfv_yardstick_tc045.rst | 139 ++++ .../user/userguide/opnfv_yardstick_tc046.rst | 138 ++++ .../user/userguide/opnfv_yardstick_tc047.rst | 139 ++++ .../user/userguide/opnfv_yardstick_tc048.rst | 139 ++++ .../user/userguide/opnfv_yardstick_tc049.rst | 139 ++++ .../user/userguide/opnfv_yardstick_tc050.rst | 135 ++++ .../user/userguide/opnfv_yardstick_tc051.rst | 117 ++++ .../user/userguide/opnfv_yardstick_tc052.rst | 141 ++++ .../user/userguide/opnfv_yardstick_tc053.rst | 142 ++++ .../user/userguide/opnfv_yardstick_tc054.rst | 125 ++++ .../user/userguide/opnfv_yardstick_tc055.rst | 67 ++ .../user/userguide/opnfv_yardstick_tc061.rst | 88 +++ .../user/userguide/opnfv_yardstick_tc063.rst | 81 +++ .../user/userguide/opnfv_yardstick_tc069.rst | 100 +++ .../user/userguide/opnfv_yardstick_tc070.rst | 110 +++ .../user/userguide/opnfv_yardstick_tc071.rst | 109 +++ .../user/userguide/opnfv_yardstick_tc072.rst | 110 +++ .../user/userguide/opnfv_yardstick_tc073.rst | 81 +++ .../user/userguide/opnfv_yardstick_tc074.rst | 137 ++++ .../user/userguide/opnfv_yardstick_tc075.rst | 60 ++ .../user/userguide/opnfv_yardstick_tc076.rst | 61 ++ docs/testing/user/userguide/references.rst | 60 ++ .../userguide/testcase_description_v2_template.rst | 64 ++ docs/userguide/01-introduction.rst | 79 --- docs/userguide/02-methodology.rst | 195 ------ docs/userguide/03-architecture.rst | 266 -------- docs/userguide/04-vtc-overview.rst | 122 ---- docs/userguide/05-apexlake_installation.rst | 300 --------- docs/userguide/06-apexlake_api.rst | 89 --- docs/userguide/07-nsb-overview.rst | 177 ----- docs/userguide/08-nsb_installation.rst | 253 ------- docs/userguide/09-installation.rst | 401 ----------- docs/userguide/10-yardstick_plugin.rst | 144 ---- docs/userguide/11-result-store-InfluxDB.rst | 86 --- docs/userguide/12-grafana.rst | 119 ---- docs/userguide/13-list-of-tcs.rst | 129 ---- 
docs/userguide/Yardstick_task_templates.rst | 160 ----- docs/userguide/comp-intro.rst | 37 - docs/userguide/glossary.rst | 65 -- docs/userguide/images/Deployment.png | Bin 17958 -> 0 bytes docs/userguide/images/Grafana_config.png | Bin 143507 -> 0 bytes docs/userguide/images/InfluxDB_store.png | Bin 1623955 -> 0 bytes docs/userguide/images/Logical_view.png | Bin 58840 -> 0 bytes docs/userguide/images/TC002.png | Bin 106382 -> 0 bytes docs/userguide/images/Use_case.png | Bin 105787 -> 0 bytes docs/userguide/images/add.png | Bin 169904 -> 0 bytes docs/userguide/images/login.png | Bin 32761 -> 0 bytes docs/userguide/images/results_visualization.png | Bin 41905 -> 0 bytes docs/userguide/images/test_execution_flow.png | Bin 51473 -> 0 bytes docs/userguide/index.rst | 27 - docs/userguide/opnfv_yardstick_tc001.rst | 133 ---- docs/userguide/opnfv_yardstick_tc002.rst | 126 ---- docs/userguide/opnfv_yardstick_tc004.rst | 110 --- docs/userguide/opnfv_yardstick_tc005.rst | 125 ---- docs/userguide/opnfv_yardstick_tc006.rst | 144 ---- docs/userguide/opnfv_yardstick_tc007.rst | 162 ----- docs/userguide/opnfv_yardstick_tc008.rst | 90 --- docs/userguide/opnfv_yardstick_tc009.rst | 89 --- docs/userguide/opnfv_yardstick_tc010.rst | 154 ----- docs/userguide/opnfv_yardstick_tc011.rst | 123 ---- docs/userguide/opnfv_yardstick_tc012.rst | 135 ---- docs/userguide/opnfv_yardstick_tc014.rst | 126 ---- docs/userguide/opnfv_yardstick_tc019.rst | 134 ---- docs/userguide/opnfv_yardstick_tc020.rst | 141 ---- docs/userguide/opnfv_yardstick_tc021.rst | 157 ----- docs/userguide/opnfv_yardstick_tc024.rst | 76 --- docs/userguide/opnfv_yardstick_tc025.rst | 123 ---- docs/userguide/opnfv_yardstick_tc027.rst | 95 --- docs/userguide/opnfv_yardstick_tc028.rst | 70 -- docs/userguide/opnfv_yardstick_tc037.rst | 167 ----- docs/userguide/opnfv_yardstick_tc038.rst | 104 --- docs/userguide/opnfv_yardstick_tc040.rst | 65 -- docs/userguide/opnfv_yardstick_tc042.rst | 87 --- docs/userguide/opnfv_yardstick_tc043.rst | 102 --- docs/userguide/opnfv_yardstick_tc044.rst | 82 --- docs/userguide/opnfv_yardstick_tc045.rst | 139 ---- docs/userguide/opnfv_yardstick_tc046.rst | 138 ---- docs/userguide/opnfv_yardstick_tc047.rst | 139 ---- docs/userguide/opnfv_yardstick_tc048.rst | 139 ---- docs/userguide/opnfv_yardstick_tc049.rst | 139 ---- docs/userguide/opnfv_yardstick_tc050.rst | 135 ---- docs/userguide/opnfv_yardstick_tc051.rst | 117 ---- docs/userguide/opnfv_yardstick_tc052.rst | 141 ---- docs/userguide/opnfv_yardstick_tc053.rst | 142 ---- docs/userguide/opnfv_yardstick_tc054.rst | 125 ---- docs/userguide/opnfv_yardstick_tc055.rst | 67 -- docs/userguide/opnfv_yardstick_tc061.rst | 88 --- docs/userguide/opnfv_yardstick_tc063.rst | 81 --- docs/userguide/opnfv_yardstick_tc069.rst | 100 --- docs/userguide/opnfv_yardstick_tc070.rst | 110 --- docs/userguide/opnfv_yardstick_tc071.rst | 109 --- docs/userguide/opnfv_yardstick_tc072.rst | 110 --- docs/userguide/opnfv_yardstick_tc073.rst | 81 --- docs/userguide/opnfv_yardstick_tc074.rst | 137 ---- docs/userguide/opnfv_yardstick_tc075.rst | 60 -- docs/userguide/opnfv_yardstick_tc076.rst | 61 -- docs/userguide/references.rst | 60 -- .../userguide/testcase_description_v2_template.rst | 64 -- 184 files changed, 12198 insertions(+), 12198 deletions(-) delete mode 100644 docs/release/index.rst delete mode 100644 docs/release/release-notes.rst create mode 100644 docs/release/release-notes/index.rst create mode 100644 docs/release/release-notes/release-notes.rst create mode 100644 docs/release/results/index.rst 
create mode 100644 docs/release/results/os-nosdn-kvm-ha.rst create mode 100644 docs/release/results/os-nosdn-nofeature-ha.rst create mode 100644 docs/release/results/os-nosdn-nofeature-noha.rst create mode 100644 docs/release/results/os-odl_l2-bgpvpn-ha.rst create mode 100644 docs/release/results/os-odl_l2-nofeature-ha.rst create mode 100644 docs/release/results/os-odl_l2-sfc-ha.rst create mode 100644 docs/release/results/os-onos-nofeature-ha.rst create mode 100644 docs/release/results/os-onos-sfc-ha.rst create mode 100644 docs/release/results/overview.rst create mode 100644 docs/release/results/results.rst create mode 100644 docs/release/results/yardstick-opnfv-ha.rst create mode 100644 docs/release/results/yardstick-opnfv-kvm.rst create mode 100644 docs/release/results/yardstick-opnfv-parser.rst create mode 100644 docs/release/results/yardstick-opnfv-vtc.rst delete mode 100644 docs/results/index.rst delete mode 100644 docs/results/os-nosdn-kvm-ha.rst delete mode 100644 docs/results/os-nosdn-nofeature-ha.rst delete mode 100644 docs/results/os-nosdn-nofeature-noha.rst delete mode 100644 docs/results/os-odl_l2-bgpvpn-ha.rst delete mode 100644 docs/results/os-odl_l2-nofeature-ha.rst delete mode 100644 docs/results/os-odl_l2-sfc-ha.rst delete mode 100644 docs/results/os-onos-nofeature-ha.rst delete mode 100644 docs/results/os-onos-sfc-ha.rst delete mode 100644 docs/results/overview.rst delete mode 100644 docs/results/results.rst delete mode 100644 docs/results/yardstick-opnfv-ha.rst delete mode 100644 docs/results/yardstick-opnfv-kvm.rst delete mode 100644 docs/results/yardstick-opnfv-parser.rst delete mode 100644 docs/results/yardstick-opnfv-vtc.rst create mode 100755 docs/testing/user/userguide/01-introduction.rst create mode 100644 docs/testing/user/userguide/02-methodology.rst create mode 100755 docs/testing/user/userguide/03-architecture.rst create mode 100644 docs/testing/user/userguide/04-vtc-overview.rst create mode 100644 docs/testing/user/userguide/05-apexlake_installation.rst create mode 100644 docs/testing/user/userguide/06-apexlake_api.rst create mode 100644 docs/testing/user/userguide/07-nsb-overview.rst create mode 100644 docs/testing/user/userguide/08-nsb_installation.rst create mode 100644 docs/testing/user/userguide/09-installation.rst create mode 100644 docs/testing/user/userguide/10-yardstick_plugin.rst create mode 100644 docs/testing/user/userguide/11-result-store-InfluxDB.rst create mode 100644 docs/testing/user/userguide/12-grafana.rst create mode 100644 docs/testing/user/userguide/13-list-of-tcs.rst create mode 100755 docs/testing/user/userguide/Yardstick_task_templates.rst create mode 100644 docs/testing/user/userguide/comp-intro.rst create mode 100644 docs/testing/user/userguide/glossary.rst create mode 100755 docs/testing/user/userguide/images/Deployment.png create mode 100644 docs/testing/user/userguide/images/Grafana_config.png create mode 100644 docs/testing/user/userguide/images/InfluxDB_store.png create mode 100644 docs/testing/user/userguide/images/Logical_view.png create mode 100644 docs/testing/user/userguide/images/TC002.png create mode 100644 docs/testing/user/userguide/images/Use_case.png create mode 100644 docs/testing/user/userguide/images/add.png create mode 100644 docs/testing/user/userguide/images/login.png create mode 100644 docs/testing/user/userguide/images/results_visualization.png create mode 100644 docs/testing/user/userguide/images/test_execution_flow.png create mode 100644 docs/testing/user/userguide/index.rst create mode 100644 
docs/testing/user/userguide/opnfv_yardstick_tc001.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc002.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc004.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc005.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc006.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc007.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc008.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc009.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc010.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc011.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc012.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc014.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc019.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc020.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc021.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc024.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc025.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc027.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc028.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc037.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc038.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc040.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc042.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc043.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc044.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc045.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc046.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc047.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc048.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc049.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc050.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc051.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc052.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc053.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc054.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc055.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc061.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc063.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc069.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc070.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc071.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc072.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc073.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc074.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc075.rst create mode 100644 docs/testing/user/userguide/opnfv_yardstick_tc076.rst create mode 100644 docs/testing/user/userguide/references.rst create mode 100644 docs/testing/user/userguide/testcase_description_v2_template.rst delete mode 100755 docs/userguide/01-introduction.rst delete mode 
100644 docs/userguide/02-methodology.rst delete mode 100755 docs/userguide/03-architecture.rst delete mode 100644 docs/userguide/04-vtc-overview.rst delete mode 100644 docs/userguide/05-apexlake_installation.rst delete mode 100644 docs/userguide/06-apexlake_api.rst delete mode 100644 docs/userguide/07-nsb-overview.rst delete mode 100644 docs/userguide/08-nsb_installation.rst delete mode 100644 docs/userguide/09-installation.rst delete mode 100644 docs/userguide/10-yardstick_plugin.rst delete mode 100644 docs/userguide/11-result-store-InfluxDB.rst delete mode 100644 docs/userguide/12-grafana.rst delete mode 100644 docs/userguide/13-list-of-tcs.rst delete mode 100755 docs/userguide/Yardstick_task_templates.rst delete mode 100644 docs/userguide/comp-intro.rst delete mode 100644 docs/userguide/glossary.rst delete mode 100755 docs/userguide/images/Deployment.png delete mode 100644 docs/userguide/images/Grafana_config.png delete mode 100644 docs/userguide/images/InfluxDB_store.png delete mode 100644 docs/userguide/images/Logical_view.png delete mode 100644 docs/userguide/images/TC002.png delete mode 100644 docs/userguide/images/Use_case.png delete mode 100644 docs/userguide/images/add.png delete mode 100644 docs/userguide/images/login.png delete mode 100644 docs/userguide/images/results_visualization.png delete mode 100644 docs/userguide/images/test_execution_flow.png delete mode 100644 docs/userguide/index.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc001.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc002.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc004.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc005.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc006.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc007.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc008.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc009.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc010.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc011.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc012.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc014.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc019.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc020.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc021.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc024.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc025.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc027.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc028.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc037.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc038.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc040.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc042.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc043.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc044.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc045.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc046.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc047.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc048.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc049.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc050.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc051.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc052.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc053.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc054.rst delete mode 100644 
docs/userguide/opnfv_yardstick_tc055.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc061.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc063.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc069.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc070.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc071.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc072.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc073.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc074.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc075.rst delete mode 100644 docs/userguide/opnfv_yardstick_tc076.rst delete mode 100644 docs/userguide/references.rst delete mode 100644 docs/userguide/testcase_description_v2_template.rst diff --git a/docs/release/index.rst b/docs/release/index.rst deleted file mode 100644 index c9cadc539..000000000 --- a/docs/release/index.rst +++ /dev/null @@ -1,13 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -====================== -Yardstick Release Note -====================== - -.. toctree:: - :maxdepth: 2 - - release-notes diff --git a/docs/release/release-notes.rst b/docs/release/release-notes.rst deleted file mode 100644 index 8df0776df..000000000 --- a/docs/release/release-notes.rst +++ /dev/null @@ -1,693 +0,0 @@ -======= -License -======= - -OPNFV Colorado release note for Yardstick Docs -are licensed under a Creative Commons Attribution 4.0 International License. -You should have received a copy of the license along with this. -If not, see . - -The *Yardstick framework*, the *Yardstick test cases* and the *ApexLake* -experimental framework are opensource software, licensed under the terms of the -Apache License, Version 2.0. - -========================================= -OPNFV Colorado Release Note for Yardstick -========================================= - -.. toctree:: - :maxdepth: 2 - -.. _Yardstick: https://wiki.opnfv.org/yardstick - -.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main - -.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf - - -Abstract -======== - -This document describes the release note of Yardstick project. - - -Version History -=============== - -+----------------+--------------------+---------------------------------+ -| *Date* | *Version* | *Comment* | -| | | | -+----------------+--------------------+---------------------------------+ -| Dec 5th, 2016 | 3.0 | Yardstick for Colorado release | -| | | | -+----------------+--------------------+---------------------------------+ -| Oct 27th, 2016 | 2.0 | Yardstick for Colorado release | -| | | | -+----------------+--------------------+---------------------------------+ -| Aug 22nd, 2016 | 1.0 | Yardstick for Colorado release | -| | | | -+----------------+--------------------+---------------------------------+ - - -Important Notes -=============== - -The software delivered in the OPNFV Yardstick_ Project, comprising the -*Yardstick framework*, the *Yardstick test cases* and the experimental -framework *Apex Lake* is a realization of the methodology in ETSI-ISG -NFV-TST001_. - -The *Yardstick* framework is *installer*, *infrastructure* and *application* -independent. 
- - -OPNFV Colorado Release -====================== - -This Colorado release provides *Yardstick* as a framework for NFVI testing -and OPNFV feature testing, automated in the OPNFV CI pipeline, including: - -* Documentation generated with Sphinx - - * User Guide - - * Code Documentation - - * Release notes (this document) - - * Results - -* Automated Yardstick test suite (daily, weekly) - - * Jenkins Jobs for OPNFV community labs - -* Automated Yardstick test results visualization - - * Dashboard_ using Grafana (user:opnfv/password: opnfv), influxDB is used as - backend - -* Yardstick framework source code - -* Yardstick test cases yaml files - -* Yardstick pliug-in configration yaml files, plug-in install/remove scripts - -For Colorado release, the *Yardstick framework* is used for the following -testing: - -* OPNFV platform testing - generic test cases to measure the categories: - - * Compute - - * Network - - * Storage - -* Test cases for the following OPNFV Projects: - - * High Availability - - * IPv6 - - * KVM - - * Parser - - * StorPerf - - * VSperf - - * virtual Traffic Classifier - -The *Yardstick framework* is developed in the OPNFV community, by the -Yardstick_ team. The *virtual Traffic Classifier* is a part of the Yardstick -Project. - -.. note:: The test case description template used for the Yardstick test cases - is based on the document ETSI-ISG NFV-TST001_; the results report template - used for the Yardstick results is based on the IEEE Std 829-2008. - - -Release Data -============ - -+--------------------------------------+--------------------------------------+ -| **Project** | Yardstick | -| | | -+--------------------------------------+--------------------------------------+ -| **Repo/tag** | yardstick/colorado.3.0 | -| | | -+--------------------------------------+--------------------------------------+ -| **Yardstick Docker image tag** | colorado.3.0 | -| | | -+--------------------------------------+--------------------------------------+ -| **Release designation** | Colorado | -| | | -+--------------------------------------+--------------------------------------+ -| **Release date** | December 5th, 2016 | -| | | -+--------------------------------------+--------------------------------------+ -| **Purpose of the delivery** | OPNFV Colorado release 3.0 | -| | | -+--------------------------------------+--------------------------------------+ - - -Deliverables -============ - -Documents ---------- - - - User Guide: http://artifacts.opnfv.org/yardstick/colorado/docs/userguide/index.html - - - Test Results: http://artifacts.opnfv.org/yardstick/colorado/docs/results/overview.html - - -Software Deliverables ---------------------- - -**Yardstick framework source code ** - -+--------------------------------------+--------------------------------------+ -| **Project** | Yardstick | -| | | -+--------------------------------------+--------------------------------------+ -| **Repo/tag** | yardstick/colorado.3.0 | -| | | -+--------------------------------------+--------------------------------------+ -| **Yardstick Docker image tag** | colorado.3.0 | -| | | -+--------------------------------------+--------------------------------------+ -| **Release designation** | Colorado | -| | | -+--------------------------------------+--------------------------------------+ -| **Release date** | December 5th, 2016 | -| | | -+--------------------------------------+--------------------------------------+ -| **Purpose of the delivery** | OPNFV Colorado release | -| | | 
-+--------------------------------------+--------------------------------------+ - - -**Contexts** - -+---------------------+-------------------------------------------------------+ -| **Context** | **Description** | -| | | -+---------------------+-------------------------------------------------------+ -| *Heat* | Models orchestration using OpenStack Heat | -| | | -+---------------------+-------------------------------------------------------+ -| *Node* | Models Baremetal, Controller, Compute | -| | | -+---------------------+-------------------------------------------------------+ - - -**Runners** - -+---------------------+-------------------------------------------------------+ -| **Runner** | **Description** | -| | | -+---------------------+-------------------------------------------------------+ -| *Arithmetic* | Steps every run arithmetically according to specified | -| | input value | -| | | -+---------------------+-------------------------------------------------------+ -| *Duration* | Runs for a specified period of time | -| | | -+---------------------+-------------------------------------------------------+ -| *Iteration* | Runs for a specified number of iterations | -| | | -+---------------------+-------------------------------------------------------+ -| *Sequence* | Selects input value to a scenario from an input file | -| | and runs all entries sequentially | -| | | -+---------------------+-------------------------------------------------------+ - - -**Scenarios** - -+---------------------+-------------------------------------------------------+ -| **Category** | **Delivered** | -| | | -+---------------------+-------------------------------------------------------+ -| *Availability* | Attacker: | -| | | -| | * baremetal, process | -| | | -| | HA tools: | -| | | -| | * check host, openstack, process, service | -| | * kill process | -| | * start/stop service | -| | | -| | Monitor: | -| | | -| | * command, process | -| | | -+---------------------+-------------------------------------------------------+ -| *Compute* | * cpuload | -| | | -| | * cyclictest | -| | | -| | * lmbench | -| | | -| | * lmbench_cache | -| | | -| | * perf | -| | | -| | * unixbench | -| | | -| | * ramspeed | -| | | -| | * cachestat | -| | | -| | * memeoryload | -| | | -| | * computecapacity | -| | | -+---------------------+-------------------------------------------------------+ -| *Networking* | * iperf3 | -| | | -| | * netperf | -| | | -| | * netperf_node | -| | | -| | * ping | -| | | -| | * ping6 | -| | | -| | * pktgen | -| | | -| | * sfc | -| | | -| | * sfc with tacker | -| | | -| | * vtc instantion validation | -| | | -| | * vtc instantion validation with noisy neighbors | -| | | -| | * vtc throughput | -| | | -| | * vtc throughput in the presence of noisy neighbors | -| | | -| | * networkcapacity | -| | | -| | * netutilization | -| | | -+---------------------+-------------------------------------------------------+ -| *Parser* | Tosca2Heat | -| | | -+---------------------+-------------------------------------------------------+ -| *Storage* | fio | -| | | -| | storagecapacity | -| | | -+---------------------+-------------------------------------------------------+ -| *StorPerf* | storperf | -| | | -+---------------------+-------------------------------------------------------+ - - -**API to Other Frameworks** - -+---------------------+-------------------------------------------------------+ -| **Framework** | **Description** | -| | | 
-+---------------------+-------------------------------------------------------+ -| *ApexLake* | Experimental framework that enables the user to | -| | validate NFVI from the perspective of a VNF. | -| | A virtual Traffic Classifier is utilized as VNF. | -| | Enables experiments with SR-IOV on Compute Node. | -| | | -+---------------------+-------------------------------------------------------+ - - -**Test Results Output** - -+-----------------------------+-----------------------------------------------+ -| **Dispatcher** | **Description** | -| | | -+-----------------------------+-----------------------------------------------+ -| file | Log to a file. | -| | | -+-----------------------------+-----------------------------------------------+ -| http | Post data to html. | -| | | -+-----------------------------+-----------------------------------------------+ -| influxdb | Post data to influxDB. | -| | | -+-----------------------------+-----------------------------------------------+ - - -Delivered Test cases --------------------- - -* Generic NFVI test cases - - * OPNFV_YARDSTICK_TCOO1 - NW Performance - - * OPNFV_YARDSTICK_TCOO2 - NW Latency - - * OPNFV_YARDSTICK_TCOO4 - Cache Utilization - - * OPNFV_YARDSTICK_TCOO5 - Storage Performance - - * OPNFV_YARDSTICK_TCOO8 - Packet Loss Extended Test - - * OPNFV_YARDSTICK_TCOO9 - Packet Loss - - * OPNFV_YARDSTICK_TCO10 - Memory Latency - - * OPNFV_YARDSTICK_TCO11 - Packet Delay Variation Between VMs - - * OPNFV_YARDSTICK_TCO12 - Memory Bandwidth - - * OPNFV_YARDSTICK_TCO14 - Processing Speed - - * OPNFV_YARDSTICK_TCO24 - CPU Load - - * OPNFV_YARDSTICK_TCO37 - Latency, CPU Load, Throughput, Packet Loss - - * OPNFV_YARDSTICK_TCO38 - Latency, CPU Load, Throughput, Packet Loss Extended - Test - - * OPNFV_YARDSTICK_TCO42 - Network Performance - - * OPNFV_YARDSTICK_TCO43 - Network Latency Between NFVI Nodes - - * OPNFV_YARDSTICK_TCO44 - Memory Utilization - - * OPNFV_YARDSTICK_TCO55 - Compute Capacity - - * OPNFV_YARDSTICK_TCO61 - Network Utilization - - * OPNFV_YARDSTICK_TCO63 - Storage Capacity - - * OPNFV_YARDSTICK_TCO69 - Memory Bandwidth - - * OPNFV_YARDSTICK_TCO70 - Latency, Memory Utilization, Throughput, Packet - Loss - - * OPNFV_YARDSTICK_TCO71 - Latency, Cache Utilization, Throughput, Packet Loss - - * OPNFV_YARDSTICK_TCO72 - Latency, Network Utilization, Throughput, Packet - Loss - - * OPNFV_YARDSTICK_TC073 - Network Latency and Throughput Between Nodes - - * OPNFV_YARDSTICK_TCO75 - Network Capacity and Scale - -* Test Cases for OPNFV HA Project: - - * OPNFV_YARDSTICK_TCO19 - HA: Control node Openstack service down - - * OPNFV_YARDSTICK_TC025 - HA: OpenStacK Controller Node abnormally down - - * OPNFV_YARDSTICK_TCO45 - HA: Control node Openstack service down - neutron - server - - * OPNFV_YARDSTICK_TC046 - HA: Control node Openstack service down - keystone - - * OPNFV_YARDSTICK_TCO47 - HA: Control node Openstack service down - glance - api - - * OPNFV_YARDSTICK_TC048 - HA: Control node Openstack service down - cinder - api - - * OPNFV_YARDSTICK_TCO49 - HA: Control node Openstack service down - swift - proxy - - * OPNFV_YARDSTICK_TC050 - HA: OpenStack Controller Node Network High - Availability - - * OPNFV_YARDSTICK_TCO51 - HA: OpenStack Controller Node CPU Overload High - Availability - - * OPNFV_YARDSTICK_TC052 - HA: OpenStack Controller Node Disk I/O Block High - Availability - - * OPNFV_YARDSTICK_TCO53 - HA: OpenStack Controller Load Balance Service High - Availability - - * OPNFV_YARDSTICK_TC054 - HA: OpenStack Virtual IP High 
Availability - -* Test Case for OPNFV IPv6 Project: - - * OPNFV_YARDSTICK_TCO27 - IPv6 connectivity - -* Test Case for OPNFV KVM Project: - - * OPNFV_YARDSTICK_TCO28 - KVM Latency measurements - -* Test Case for OPNFV Parser Project: - - * OPNFV_YARDSTICK_TCO40 - Verify Parser Yang-to-Tosca - -* Test Case for OPNFV StorPerf Project: - - * OPNFV_YARDSTICK_TCO74 - Storperf - -* Test Cases for Virtual Traffic Classifier: - - * OPNFV_YARDSTICK_TC006 - Virtual Traffic Classifier Data Plane Throughput -Benchmarking Test - - * OPNFV_YARDSTICK_TC007 - Virtual Traffic Classifier Data Plane Throughput -Benchmarking in presence of noisy neighbors Test - - * OPNFV_YARDSTICK_TC020 - Virtual Traffic Classifier Instantiation Test - - * OPNFV_YARDSTICK_TC021 - Virtual Traffic Classifier Instantiation in -presence of noisy neighbors Test - - -Version Change -============== - -Module Version Changes ----------------------- - -This is the second tracked release of Yardstick. It is based on following -upstream versions: - -- ONOS Goldeneye - -- OpenStack Mitaka - -- OpenDaylight Beryllium - - -Document Version Changes ------------------------- - -This is the second tracked version of the Yardstick framework in OPNFV. -It includes the following documentation updates: - -- Yardstick User Guide: added yardstick plugin chapter; added Store Other -Project's Test Results in InfluxDB chapter; Refine yardstick instantion chapter. - -- Yardstick Code Documentation: no changes - -- Yardstick Release Notes for Yardstick: this document - -- Test Results report for Colorado testing with Yardstick: updated listed of -verified scenarios and limitations - - -Feature additions ------------------ - - Yardstick plugin - - Yardstick reporting - - StorPerf Integration - - -Scenario Matrix -=============== - -For Colorado 3.0, Yardstick was tested on the following scenarios: - -+-------------------------+---------+---------+---------+---------+ -| Scenario | Apex | Compass | Fuel | Joid | -+=========================+=========+=========+=========+=========+ -| os-nosdn-nofeature-noha | | | | X | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-nofeature-ha | X | | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-nofeature-ha | X | X | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-nofeature-noha| | X | | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l3-nofeature-ha | X | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l3-nofeature-ha | | X | | | -+-------------------------+---------+---------+---------+---------+ -| os-onos-sfc-ha | X | | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-onos-nofeature-ha | X | | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-onos-nofeature-noha | | X | | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-sfc-ha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-sfc-noha | X | X | | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-bgpvpn-ha | X | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-bgpvpn-noha | | X | | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-kvm-ha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| 
os-nosdn-kvm-noha | | X | | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-ovs-ha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-ovs-noha | X | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-ocl-nofeature-ha | | | | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-lxd-ha | | | | X | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-lxd-noha | | | | X | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-fdio-noha | X | | | | -+-------------------------+---------+---------+---------+---------+ - - -Test results -============ - -Test results are available in: - - - jenkins logs on CI: https://build.opnfv.org/ci/view/yardstick/ - -The reporting pages can be found at: - - * apex: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-apex.html - * compass: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-compass.html - * fuel: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-fuel.html - * joid: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-joid.html - -You can get additional details through test logs on http://artifacts.opnfv.org/. -As no search engine is available on the OPNFV artifact web site you must -retrieve the pod identifier on which the tests have been executed (see -field pod in any of the results) then click on the selected POD and look -for the date of the test you are interested in. - - -Known Issues/Faults ------------- - - Floating IP not supported in bgpvpn scenario - - Floating IP not supported in apex-os-odl_l3-nofeature-ha scenario - -.. note:: The faults not related to *Yardstick* framework, addressing scenarios - which were not fully verified, are listed in the OPNFV installer's release - notes. - - -Corrected Faults ----------------- - -Colorado.3.0: - -+----------------------------+------------------------------------------------+ -| **JIRA REFERENCE** | **SLOGAN** | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-239 | Define process for working with Yardstick | -| | Grafana dashboard. | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-373 | Add os-odl_l2-fdio-ha scenario support. | -| | | -+----------------------------+------------------------------------------------+ - - -Colorado.2.0: - -+----------------------------+------------------------------------------------+ -| **JIRA REFERENCE** | **SLOGAN** | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-325 | Provide raw format yardstick vm image for | -| | nova-lxd scenario. | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-358 | tc027 ipv6 test case to de-coupling to the | -| | installers. | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-359 | ipv6 testcase disable port-security on | -| | vRouter. | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-363 | ipv6 testcase to support fuel. 
| -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-367 | Add d3 graph presentation to yardstick | -| | reporting. | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-371 | Provide raw format yardstick vm image for | -| | nova-lxd scenario. | -| | | -+----------------------------+------------------------------------------------+ -| JIRA: YARDSTICK-372 | cannot find yardstick-img-dpdk-modify and | -| | yardstick-img-lxd-modify in environment | -| | varibales. | -| | | -+----------------------------+------------------------------------------------+ - - -Colorado 3.0 known restrictions/issues -================================== -+-----------+-----------+----------------------------------------------+ -| Installer | Scenario | Issue | -+===========+===========+==============================================+ -| any | *-bgpvpn | Floating ips not supported. Some Test cases | -| | | related to floating ips are excluded. | -+-----------+-----------+----------------------------------------------+ -| any | odl_l3-* | Some test cases related to using floating IP | -| | | addresses fail because of a known ODL bug. | -| | | https://jira.opnfv.org/browse/APEX-112 | -+-----------+-----------+----------------------------------------------+ - - -Open JIRA tickets -================= - - -Useful links -============ - - - wiki project page: https://wiki.opnfv.org/display/yardstick/Yardstick - - - wiki Yardstick Colorado release planing page: https://wiki.opnfv.org/display/yardstick/Yardstick+Colorado+Release+Planning - - - wiki Yardstick Colorado release jira page: https://wiki.opnfv.org/display/yardstick/Jira+Yardstick-Colorado - - - Yardstick repo: https://git.opnfv.org/cgit/yardstick - - - Yardstick CI dashboard: https://build.opnfv.org/ci/view/yardstick - - - Yardstick grafana dashboard: http://testresults.opnfv.org/grafana/ - - - Yardstick IRC chanel: #opnfv-yardstick - -.. _`YARDSTICK-239` : https://jira.opnfv.org/browse/YARDSTICK-239 - -.. _`YARDSTICK-325` : https://jira.opnfv.org/browse/YARDSTICK-325 - -.. _`YARDSTICK-358` : https://jira.opnfv.org/browse/YARDSTICK-358 - -.. _`YARDSTICK-359` : https://jira.opnfv.org/browse/YARDSTICK-359 - -.. _`YARDSTICK-363` : https://jira.opnfv.org/browse/YARDSTICK-363 - -.. _`YARDSTICK-367` : https://jira.opnfv.org/browse/YARDSTICK-367 - -.. _`YARDSTICK-371` : https://jira.opnfv.org/browse/YARDSTICK-371 - -.. _`YARDSTICK-372` : https://jira.opnfv.org/browse/YARDSTICK-372 - -.. _`YARDSTICK-373` : https://jira.opnfv.org/browse/YARDSTICK-373 diff --git a/docs/release/release-notes/index.rst b/docs/release/release-notes/index.rst new file mode 100644 index 000000000..c9cadc539 --- /dev/null +++ b/docs/release/release-notes/index.rst @@ -0,0 +1,13 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB and others. + +====================== +Yardstick Release Note +====================== + +.. toctree:: + :maxdepth: 2 + + release-notes diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst new file mode 100644 index 000000000..8df0776df --- /dev/null +++ b/docs/release/release-notes/release-notes.rst @@ -0,0 +1,693 @@ +======= +License +======= + +OPNFV Colorado release note for Yardstick Docs +are licensed under a Creative Commons Attribution 4.0 International License. 
+You should have received a copy of the license along with this.
+If not, see <http://creativecommons.org/licenses/by/4.0/>.
+
+The *Yardstick framework*, the *Yardstick test cases* and the *ApexLake*
+experimental framework are open-source software, licensed under the terms of
+the Apache License, Version 2.0.
+
+=========================================
+OPNFV Colorado Release Note for Yardstick
+=========================================
+
+.. toctree::
+   :maxdepth: 2
+
+.. _Yardstick: https://wiki.opnfv.org/yardstick
+
+.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+
+.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
+
+
+Abstract
+========
+
+This document provides the release notes of the Yardstick project.
+
+
+Version History
+===============
+
++----------------+--------------------+---------------------------------+
+| *Date*         | *Version*          | *Comment*                       |
+|                |                    |                                 |
++----------------+--------------------+---------------------------------+
+| Dec 5th, 2016  | 3.0                | Yardstick for Colorado release  |
+|                |                    |                                 |
++----------------+--------------------+---------------------------------+
+| Oct 27th, 2016 | 2.0                | Yardstick for Colorado release  |
+|                |                    |                                 |
++----------------+--------------------+---------------------------------+
+| Aug 22nd, 2016 | 1.0                | Yardstick for Colorado release  |
+|                |                    |                                 |
++----------------+--------------------+---------------------------------+
+
+
+Important Notes
+===============
+
+The software delivered in the OPNFV Yardstick_ Project, comprising the
+*Yardstick framework*, the *Yardstick test cases* and the experimental
+framework *ApexLake*, is a realization of the methodology in ETSI-ISG
+NFV-TST001_.
+
+The *Yardstick* framework is *installer*, *infrastructure* and *application*
+independent.
+
+
+OPNFV Colorado Release
+======================
+
+This Colorado release provides *Yardstick* as a framework for NFVI testing
+and OPNFV feature testing, automated in the OPNFV CI pipeline, including:
+
+* Documentation generated with Sphinx
+
+  * User Guide
+
+  * Code Documentation
+
+  * Release notes (this document)
+
+  * Results
+
+* Automated Yardstick test suite (daily, weekly)
+
+  * Jenkins Jobs for OPNFV community labs
+
+* Automated Yardstick test results visualization
+
+  * Dashboard_ using Grafana (user: opnfv / password: opnfv), with InfluxDB
+    as the backend
+
+* Yardstick framework source code
+
+* Yardstick test case YAML files
+
+* Yardstick plug-in configuration YAML files, plug-in install/remove scripts
+
+For the Colorado release, the *Yardstick framework* is used for the following
+testing:
+
+* OPNFV platform testing - generic test cases to measure the categories:
+
+  * Compute
+
+  * Network
+
+  * Storage
+
+* Test cases for the following OPNFV Projects:
+
+  * High Availability
+
+  * IPv6
+
+  * KVM
+
+  * Parser
+
+  * StorPerf
+
+  * VSperf
+
+  * virtual Traffic Classifier
+
+The *Yardstick framework* is developed in the OPNFV community, by the
+Yardstick_ team. The *virtual Traffic Classifier* is a part of the Yardstick
+Project.
+
+.. note:: The test case description template used for the Yardstick test cases
+   is based on the document ETSI-ISG NFV-TST001_; the results report template
+   used for the Yardstick results is based on the IEEE Std 829-2008.
+
+
+Release Data
+============
+
++--------------------------------------+--------------------------------------+
+| **Project**                          | Yardstick                            |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Repo/tag**                         | yardstick/colorado.3.0               |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Yardstick Docker image tag**       | colorado.3.0                         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release designation**              | Colorado                             |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release date**                     | December 5th, 2016                   |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Purpose of the delivery**          | OPNFV Colorado release 3.0           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+
+Deliverables
+============
+
+Documents
+---------
+
+ - User Guide: http://artifacts.opnfv.org/yardstick/colorado/docs/userguide/index.html
+
+ - Test Results: http://artifacts.opnfv.org/yardstick/colorado/docs/results/overview.html
+
+
+Software Deliverables
+---------------------
+
+**Yardstick framework source code**
+
++--------------------------------------+--------------------------------------+
+| **Project**                          | Yardstick                            |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Repo/tag**                         | yardstick/colorado.3.0               |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Yardstick Docker image tag**       | colorado.3.0                         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release designation**              | Colorado                             |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release date**                     | December 5th, 2016                   |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Purpose of the delivery**          | OPNFV Colorado release               |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+
+**Contexts**
+
++---------------------+-------------------------------------------------------+
+| **Context**         | **Description**                                       |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Heat*              | Models orchestration using OpenStack Heat             |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Node*              | Models Baremetal, Controller, Compute                 |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+
+
+**Runners**
+
++---------------------+-------------------------------------------------------+
+| **Runner**          | **Description**                                       |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Arithmetic*        | Steps every run arithmetically according to specified |
+|                     | input value                                           |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Duration*          | Runs for a specified period of time                   |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Iteration*         | Runs for a specified number of iterations             |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Sequence*          | Selects input value to a scenario from an input file  |
+|                     | and runs all entries sequentially                     |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
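To make the Contexts and Runners above concrete, the following is a minimal
sketch of a Yardstick task file, modelled on the project's sample ping test.
The context name, server names and option values are illustrative assumptions,
not deliverables of this release.

.. code-block:: yaml

   ---
   # Sketch: a Ping scenario driven by the Duration runner inside a
   # Heat context. All names and values below are illustrative.
   schema: "yardstick:task:0.1"
   scenarios:
   - type: Ping
     options:
       packetsize: 200        # ICMP payload size in bytes
     host: athena.demo        # client VM defined in the context below
     target: ares.demo        # server VM defined in the context below
     runner:
       type: Duration         # repeat the scenario for a fixed time
       duration: 60           # seconds
       interval: 1            # pause between iterations, in seconds
     sla:
       max_rtt: 10            # round-trip time threshold, in ms
       action: monitor
   context:                   # Heat context: a stack is deployed per run
     name: demo
     image: cirros-0.3.3
     flavor: m1.tiny
     user: cirros
     servers:
       athena:
       ares:
     networks:
       test:
         cidr: '10.0.1.0/24'

Swapping ``runner.type`` to ``Iteration`` with an ``iterations`` count, or to
``Arithmetic`` or ``Sequence`` with their stepping options, changes only the
run-control section; the scenario and context stay the same.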
+
+
+**Scenarios**
+
++---------------------+-------------------------------------------------------+
+| **Category**        | **Delivered**                                         |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Availability*      | Attacker:                                             |
+|                     |                                                       |
+|                     | * baremetal, process                                  |
+|                     |                                                       |
+|                     | HA tools:                                             |
+|                     |                                                       |
+|                     | * check host, openstack, process, service             |
+|                     | * kill process                                        |
+|                     | * start/stop service                                  |
+|                     |                                                       |
+|                     | Monitor:                                              |
+|                     |                                                       |
+|                     | * command, process                                    |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Compute*           | * cpuload                                             |
+|                     |                                                       |
+|                     | * cyclictest                                          |
+|                     |                                                       |
+|                     | * lmbench                                             |
+|                     |                                                       |
+|                     | * lmbench_cache                                       |
+|                     |                                                       |
+|                     | * perf                                                |
+|                     |                                                       |
+|                     | * unixbench                                           |
+|                     |                                                       |
+|                     | * ramspeed                                            |
+|                     |                                                       |
+|                     | * cachestat                                           |
+|                     |                                                       |
+|                     | * memoryload                                          |
+|                     |                                                       |
+|                     | * computecapacity                                     |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Networking*        | * iperf3                                              |
+|                     |                                                       |
+|                     | * netperf                                             |
+|                     |                                                       |
+|                     | * netperf_node                                        |
+|                     |                                                       |
+|                     | * ping                                                |
+|                     |                                                       |
+|                     | * ping6                                               |
+|                     |                                                       |
+|                     | * pktgen                                              |
+|                     |                                                       |
+|                     | * sfc                                                 |
+|                     |                                                       |
+|                     | * sfc with tacker                                     |
+|                     |                                                       |
+|                     | * vtc instantiation validation                        |
+|                     |                                                       |
+|                     | * vtc instantiation validation with noisy neighbors   |
+|                     |                                                       |
+|                     | * vtc throughput                                      |
+|                     |                                                       |
+|                     | * vtc throughput in the presence of noisy neighbors   |
+|                     |                                                       |
+|                     | * networkcapacity                                     |
+|                     |                                                       |
+|                     | * netutilization                                      |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Parser*            | Tosca2Heat                                            |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *Storage*           | fio                                                   |
+|                     |                                                       |
+|                     | storagecapacity                                       |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *StorPerf*          | storperf                                              |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+
+
+**API to Other Frameworks**
+
++---------------------+-------------------------------------------------------+
+| **Framework**       | **Description**                                       |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+| *ApexLake*          | Experimental framework that enables the user to       |
+|                     | validate NFVI from the perspective of a VNF.          |
+|                     | A virtual Traffic Classifier is utilized as the VNF.  |
+|                     | Enables experiments with SR-IOV on the Compute Node.  |
+|                     |                                                       |
++---------------------+-------------------------------------------------------+
+
+
+**Test Results Output**
+
++-----------------------------+-----------------------------------------------+
+| **Dispatcher**              | **Description**                               |
+|                             |                                               |
++-----------------------------+-----------------------------------------------+
+| file                        | Log to a file.                                |
+|                             |                                               |
++-----------------------------+-----------------------------------------------+
+| http                        | Post data to an HTTP server.                  |
+|                             |                                               |
++-----------------------------+-----------------------------------------------+
+| influxdb                    | Post data to InfluxDB.                        |
+|                             |                                               |
++-----------------------------+-----------------------------------------------+
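As a rough illustration of how a dispatcher from the table above is selected,
below is a sketch of a ``/etc/yardstick/yardstick.conf`` fragment for the
``influxdb`` dispatcher; the target address and credentials are placeholders,
not values shipped with this release.

.. code-block:: ini

   # Sketch of /etc/yardstick/yardstick.conf; values are placeholders.
   [DEFAULT]
   debug = False
   dispatcher = influxdb

   [dispatcher_influxdb]
   timeout = 5
   # InfluxDB HTTP API endpoint (placeholder address)
   target = http://10.118.36.90:8086
   db_name = yardstick
   username = root
   password = root

With this in place, results from subsequent runs land in the ``yardstick``
database and can be charted in the Grafana Dashboard_ mentioned earlier.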
| +| | | ++-----------------------------+-----------------------------------------------+ + + +Delivered Test cases +-------------------- + +* Generic NFVI test cases + + * OPNFV_YARDSTICK_TCOO1 - NW Performance + + * OPNFV_YARDSTICK_TCOO2 - NW Latency + + * OPNFV_YARDSTICK_TCOO4 - Cache Utilization + + * OPNFV_YARDSTICK_TCOO5 - Storage Performance + + * OPNFV_YARDSTICK_TCOO8 - Packet Loss Extended Test + + * OPNFV_YARDSTICK_TCOO9 - Packet Loss + + * OPNFV_YARDSTICK_TCO10 - Memory Latency + + * OPNFV_YARDSTICK_TCO11 - Packet Delay Variation Between VMs + + * OPNFV_YARDSTICK_TCO12 - Memory Bandwidth + + * OPNFV_YARDSTICK_TCO14 - Processing Speed + + * OPNFV_YARDSTICK_TCO24 - CPU Load + + * OPNFV_YARDSTICK_TCO37 - Latency, CPU Load, Throughput, Packet Loss + + * OPNFV_YARDSTICK_TCO38 - Latency, CPU Load, Throughput, Packet Loss Extended + Test + + * OPNFV_YARDSTICK_TCO42 - Network Performance + + * OPNFV_YARDSTICK_TCO43 - Network Latency Between NFVI Nodes + + * OPNFV_YARDSTICK_TCO44 - Memory Utilization + + * OPNFV_YARDSTICK_TCO55 - Compute Capacity + + * OPNFV_YARDSTICK_TCO61 - Network Utilization + + * OPNFV_YARDSTICK_TCO63 - Storage Capacity + + * OPNFV_YARDSTICK_TCO69 - Memory Bandwidth + + * OPNFV_YARDSTICK_TCO70 - Latency, Memory Utilization, Throughput, Packet + Loss + + * OPNFV_YARDSTICK_TCO71 - Latency, Cache Utilization, Throughput, Packet Loss + + * OPNFV_YARDSTICK_TCO72 - Latency, Network Utilization, Throughput, Packet + Loss + + * OPNFV_YARDSTICK_TC073 - Network Latency and Throughput Between Nodes + + * OPNFV_YARDSTICK_TCO75 - Network Capacity and Scale + +* Test Cases for OPNFV HA Project: + + * OPNFV_YARDSTICK_TCO19 - HA: Control node Openstack service down + + * OPNFV_YARDSTICK_TC025 - HA: OpenStacK Controller Node abnormally down + + * OPNFV_YARDSTICK_TCO45 - HA: Control node Openstack service down - neutron + server + + * OPNFV_YARDSTICK_TC046 - HA: Control node Openstack service down - keystone + + * OPNFV_YARDSTICK_TCO47 - HA: Control node Openstack service down - glance + api + + * OPNFV_YARDSTICK_TC048 - HA: Control node Openstack service down - cinder + api + + * OPNFV_YARDSTICK_TCO49 - HA: Control node Openstack service down - swift + proxy + + * OPNFV_YARDSTICK_TC050 - HA: OpenStack Controller Node Network High + Availability + + * OPNFV_YARDSTICK_TCO51 - HA: OpenStack Controller Node CPU Overload High + Availability + + * OPNFV_YARDSTICK_TC052 - HA: OpenStack Controller Node Disk I/O Block High + Availability + + * OPNFV_YARDSTICK_TCO53 - HA: OpenStack Controller Load Balance Service High + Availability + + * OPNFV_YARDSTICK_TC054 - HA: OpenStack Virtual IP High Availability + +* Test Case for OPNFV IPv6 Project: + + * OPNFV_YARDSTICK_TCO27 - IPv6 connectivity + +* Test Case for OPNFV KVM Project: + + * OPNFV_YARDSTICK_TCO28 - KVM Latency measurements + +* Test Case for OPNFV Parser Project: + + * OPNFV_YARDSTICK_TCO40 - Verify Parser Yang-to-Tosca + +* Test Case for OPNFV StorPerf Project: + + * OPNFV_YARDSTICK_TCO74 - Storperf + +* Test Cases for Virtual Traffic Classifier: + + * OPNFV_YARDSTICK_TC006 - Virtual Traffic Classifier Data Plane Throughput +Benchmarking Test + + * OPNFV_YARDSTICK_TC007 - Virtual Traffic Classifier Data Plane Throughput +Benchmarking in presence of noisy neighbors Test + + * OPNFV_YARDSTICK_TC020 - Virtual Traffic Classifier Instantiation Test + + * OPNFV_YARDSTICK_TC021 - Virtual Traffic Classifier Instantiation in +presence of noisy neighbors Test + + +Version Change +============== + +Module Version Changes +---------------------- 
+
+This is the second tracked release of Yardstick. It is based on the following
+upstream versions:
+
+- ONOS Goldeneye
+
+- OpenStack Mitaka
+
+- OpenDaylight Beryllium
+
+
+Document Version Changes
+------------------------
+
+This is the second tracked version of the Yardstick framework in OPNFV.
+It includes the following documentation updates:
+
+- Yardstick User Guide: added the Yardstick plugin chapter; added the Store
+  Other Project's Test Results in InfluxDB chapter; refined the Yardstick
+  installation chapter.
+
+- Yardstick Code Documentation: no changes
+
+- Yardstick Release Notes for Yardstick: this document
+
+- Test Results report for Colorado testing with Yardstick: updated list of
+  verified scenarios and limitations
+
+
+Feature additions
+-----------------
+ - Yardstick plugin
+ - Yardstick reporting
+ - StorPerf integration
+
+
+Scenario Matrix
+===============
+
+For Colorado 3.0, Yardstick was tested on the following scenarios:
+
++-------------------------+---------+---------+---------+---------+
+| Scenario                | Apex    | Compass | Fuel    | Joid    |
++=========================+=========+=========+=========+=========+
+| os-nosdn-nofeature-noha |         |         |         |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-nofeature-ha   |    X    |         |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-ha  |    X    |    X    |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-noha|         |    X    |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-ha  |    X    |    X    |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-onos-sfc-ha          |    X    |         |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-ha    |    X    |         |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-noha  |         |    X    |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-ha        |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-noha      |    X    |    X    |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-ha     |    X    |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-noha   |         |    X    |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-ha         |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-noha       |         |    X    |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-ha         |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-noha       |    X    |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-ocl-nofeature-ha     |         |         |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-ha         |         |         |         |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-noha       |         |         |         |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-fdio-noha     |    X    |         |         |         |
++-------------------------+---------+---------+---------+---------+
+
+
+Test results
+============
+
+Test results are available in:
+
+  - Jenkins logs on CI: https://build.opnfv.org/ci/view/yardstick/
+
+The reporting pages can be found at:
+
+ * apex:
http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-apex.html
+ * compass: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-compass.html
+ * fuel: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-fuel.html
+ * joid: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-joid.html
+
+Additional details are available in the test logs on
+http://artifacts.opnfv.org/. As no search engine is available on the OPNFV
+artifact web site, first retrieve the identifier of the POD on which the tests
+were executed (see the *pod* field in any of the results), then browse to that
+POD and look for the date of the test run you are interested in.
+
+
+Known Issues/Faults
+-------------------
+ - Floating IP not supported in the bgpvpn scenario
+ - Floating IP not supported in the apex-os-odl_l3-nofeature-ha scenario
+
+.. note:: Faults that are not related to the *Yardstick* framework, but to
+   scenarios which were not fully verified, are listed in the respective
+   OPNFV installer's release notes.
+
+
+Corrected Faults
+----------------
+
+Colorado.3.0:
+
++----------------------------+------------------------------------------------+
+| **JIRA REFERENCE**         | **SLOGAN**                                     |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-239        | Define process for working with Yardstick      |
+|                            | Grafana dashboard.                             |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-373        | Add os-odl_l2-fdio-ha scenario support.        |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+
+
+Colorado.2.0:
+
++----------------------------+------------------------------------------------+
+| **JIRA REFERENCE**         | **SLOGAN**                                     |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-325        | Provide raw format yardstick vm image for      |
+|                            | nova-lxd scenario.                             |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-358        | De-couple the tc027 ipv6 test case from the    |
+|                            | installers.                                    |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-359        | Disable port-security on the vRouter in the    |
+|                            | ipv6 test case.                                |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-363        | Support fuel in the ipv6 test case.            |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-367        | Add d3 graph presentation to yardstick         |
+|                            | reporting.                                     |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-371        | Provide raw format yardstick vm image for      |
+|                            | nova-lxd scenario.                             |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-372        | Cannot find yardstick-img-dpdk-modify and      |
+|                            | yardstick-img-lxd-modify in environment        |
+|                            | variables.                                     |
+|                            |                                                |
++----------------------------+------------------------------------------------+
+
+
+Colorado 3.0 known restrictions/issues
+======================================
+
++-----------+-----------+----------------------------------------------+
+| Installer | Scenario  | Issue                                        |
++===========+===========+==============================================+
+| any       | *-bgpvpn  | Floating IPs not supported. Some test cases  |
+|           |           | related to floating IPs are excluded.        |
++-----------+-----------+----------------------------------------------+
+| any       | odl_l3-*  | Some test cases related to using floating IP |
+|           |           | addresses fail because of a known ODL bug.   |
+|           |           | https://jira.opnfv.org/browse/APEX-112       |
++-----------+-----------+----------------------------------------------+
+
+
+Open JIRA tickets
+=================
+
+
+Useful links
+============
+
+ - wiki project page: https://wiki.opnfv.org/display/yardstick/Yardstick
+
+ - wiki Yardstick Colorado release planning page: https://wiki.opnfv.org/display/yardstick/Yardstick+Colorado+Release+Planning
+
+ - wiki Yardstick Colorado release Jira page: https://wiki.opnfv.org/display/yardstick/Jira+Yardstick-Colorado
+
+ - Yardstick repo: https://git.opnfv.org/cgit/yardstick
+
+ - Yardstick CI dashboard: https://build.opnfv.org/ci/view/yardstick
+
+ - Yardstick Grafana dashboard: http://testresults.opnfv.org/grafana/
+
+ - Yardstick IRC channel: #opnfv-yardstick
+
+.. _`YARDSTICK-239` : https://jira.opnfv.org/browse/YARDSTICK-239
+
+.. _`YARDSTICK-325` : https://jira.opnfv.org/browse/YARDSTICK-325
+
+.. _`YARDSTICK-358` : https://jira.opnfv.org/browse/YARDSTICK-358
+
+.. _`YARDSTICK-359` : https://jira.opnfv.org/browse/YARDSTICK-359
+
+.. _`YARDSTICK-363` : https://jira.opnfv.org/browse/YARDSTICK-363
+
+.. _`YARDSTICK-367` : https://jira.opnfv.org/browse/YARDSTICK-367
+
+.. _`YARDSTICK-371` : https://jira.opnfv.org/browse/YARDSTICK-371
+
+.. _`YARDSTICK-372` : https://jira.opnfv.org/browse/YARDSTICK-372
+
+.. _`YARDSTICK-373` : https://jira.opnfv.org/browse/YARDSTICK-373
diff --git a/docs/release/results/index.rst b/docs/release/results/index.rst
new file mode 100644
index 000000000..2b67f1b22
--- /dev/null
+++ b/docs/release/results/index.rst
@@ -0,0 +1,14 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+======================
+Yardstick test results
+======================
+
+.. toctree::
+   :maxdepth: 4
+
+.. include:: ./overview.rst
+.. include:: ./results.rst
diff --git a/docs/release/results/os-nosdn-kvm-ha.rst b/docs/release/results/os-nosdn-kvm-ha.rst
new file mode 100644
index 000000000..a8a56f80e
--- /dev/null
+++ b/docs/release/results/os-nosdn-kvm-ha.rst
@@ -0,0 +1,270 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+================================
+Test Results for os-nosdn-kvm-ha
+================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then also to
+zoom in on the details of each individual test run.
+
+All of the test case results below are based on 4 scenario test
+runs, each run on the Ericsson POD2_ or LF POD2_ between August 24 and 30 in
+2016.
+
+TC002
+-----
+The round-trip time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 0.44 and
+0.75 ms. A few runs start with a 0.65 - 0.68 ms RTT spike (this could be due
+to normal ARP handling). One test run has a greater RTT spike of 1.49 ms.
+To be able to draw conclusions, more runs should be made. SLA set to 10 ms.
+The SLA value is used as a reference; it has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 92 and 204 MB/s. Within each test run the results
+vary, with a minimum of 2 MB/s and a maximum of 819 MB/s overall. Most runs
+have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in
+absolute numbers between the dates, between 238 and 819 MB/s.
+SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 2.07 ns. The variations within each test run are similar, between
+1.41 and 3.53 ns.
+SLA set to 30 ns. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. The reported packet delay variation varies between 0.0051 and
+0.0243 ms, with an average delay variation between 0.0081 ms and 0.0195 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth result in
+approx. 13.6 GB/s. Within each test run the results vary more, with a minimum
+BW of 6.09 GB/s and a maximum of 16.47 GB/s overall.
+SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 2316 and 3619,
+with one result per date.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as packet generator tool. The average measurements for CPU
+utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
+appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimum
+BW of 20.0 GB/s and a maximum of 29.5 GB/s overall.
+SLA set to 6 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for memory
+utilization vary between 225 MB and 246 MB. The peak of memory utilization
+appears around 340 MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx.
10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for cache
+utilization vary between 205 MB and 212 MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The total number of packets received
+per second averaged 200 kpps and the total number of packets transmitted per
+second averaged 600 kpps.
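+
+The network utilization numbers above are collected from interface counters
+while pktgen is generating load (later sections name sar as the collector).
+Purely as an illustration, not part of the test suite itself, and with a
+hypothetical interface name and sampling interval, per-second RX/TX packet
+rates of this kind can be derived from ``/proc/net/dev`` like this:
+
+.. code-block:: python
+
+   import time
+
+   def packet_rates(iface="eth0", interval=1.0):
+       """Sample RX/TX packets per second for one interface."""
+       def counters():
+           # /proc/net/dev holds cumulative per-interface counters; after
+           # the colon, field 1 is rx_packets and field 9 is tx_packets.
+           with open("/proc/net/dev") as f:
+               for line in f:
+                   name, _, data = line.partition(":")
+                   if name.strip() == iface:
+                       fields = data.split()
+                       return int(fields[1]), int(fields[9])
+           raise ValueError("interface not found: %s" % iface)
+
+       rx0, tx0 = counters()
+       time.sleep(interval)
+       rx1, tx1 = counters()
+       return (rx1 - rx0) / interval, (tx1 - tx0) / interval
+
+   rx_pps, tx_pps = packet_rates()
+   print("rx %.0f pps, tx %.0f pps" % (rx_pps, tx_pps))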
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+
+- Fuel 9.0
+- OpenStack Mitaka
+- OpenVirtualSwitch 2.5.90
+- OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is roughly a 3000 percent difference
+in RTT results.
+RTT and throughput in particular come out better than in, for instance, the
+*fuel-os-nosdn-nofeature-ha* scenario. The reason for this should be analyzed
+and understood further. It could also be of interest to analyze the patterns
+in, and reasons for, the lost traffic, and to see whether there are recurring
+variations where some test runs stand out with better or worse results than
+the general test case.
+
diff --git a/docs/release/results/os-nosdn-nofeature-ha.rst b/docs/release/results/os-nosdn-nofeature-ha.rst
new file mode 100644
index 000000000..9e52731d5
--- /dev/null
+++ b/docs/release/results/os-nosdn-nofeature-ha.rst
@@ -0,0 +1,492 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+======================================
+Test Results for os-nosdn-nofeature-ha
+======================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+apex
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then also to
+zoom in on the details of each individual test run.
+
+All of the test case results below are based on 4 scenario test
+runs, each run on the LF POD1_ between August 25 and 28 in
+2016.
+
+TC002
+-----
+The round-trip time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 0.74 and
+1.08 ms. A few runs start with a 0.99 - 1.07 ms RTT spike (this could be due
+to normal ARP handling). One test run has a greater RTT spike of 1.35 ms.
+To be able to draw conclusions, more runs should be made. SLA set to 10 ms.
+The SLA value is used as a reference; it has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 128 and 136 MB/s. Within each test run the results
+vary, with a minimum of 5 MB/s and a maximum of 446 MB/s overall. Most runs
+have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW varies more in
+absolute numbers between the dates, between 416 and 446 MB/s.
+SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 1.09 ns. The variations within each test run are similar, between
+1.0860 and 1.0880 ns.
+SLA set to 30 ns. The SLA value is used as a reference; it has not been
+defined by OPNFV.
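+
+TC011 below derives packet delay variation from Iperf3's UDP statistics.
+Purely as an illustration (the server address and test duration here are
+hypothetical placeholders, not the suite's actual parameters), the jitter
+reported in iperf3's JSON summary can be extracted like this:
+
+.. code-block:: python
+
+   import json
+   import subprocess
+
+   # Run a short UDP test against an iperf3 server on the peer VM and
+   # parse the JSON summary; 10.0.0.2 is a placeholder address.
+   out = subprocess.check_output(
+       ["iperf3", "-c", "10.0.0.2", "-u", "-t", "10", "-J"])
+   report = json.loads(out)
+
+   # For UDP tests iperf3 reports the mean delay variation (jitter) in ms.
+   jitter_ms = report["end"]["sum"]["jitter_ms"]
+   lost = report["end"]["sum"]["lost_packets"]
+   print("jitter %.4f ms, %d packets lost" % (jitter_ms, lost))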
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. The reported packet delay variation varies between 0.0025 and
+0.0148 ms, with an average delay variation between 0.0056 ms and 0.0157 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth result in
+approx. 19.70 GB/s. Within each test run the results vary more, with a minimum
+BW of 18.16 GB/s and a maximum of 20.13 GB/s overall.
+SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 3224.4 and 3842.8,
+with one result per date. The overall average score is 3659.5.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as packet generator tool. The average measurements for CPU
+utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
+appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimum
+BW of 20.0 GB/s and a maximum of 29.5 GB/s overall.
+SLA set to 6 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for memory
+utilization vary between 225 MB and 246 MB. The peak of memory utilization
+appears around 340 MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs.
There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for cache
+utilization vary between 205 MB and 212 MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one mentioned
+earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The total number of packets received
+per second averaged 200 kpps and the total number of packets transmitted per
+second averaged 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on LF POD1_ with:
+
+- Apex
+- OpenStack Mitaka
+- OpenVirtualSwitch 2.5.90
+- OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
+
+
+Joid
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then also to
+zoom in on the details of each individual test run.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD5_ between September 11 and 14 in 2016.
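+
+The TC002 RTT figures below are gathered with plain ping between the VMs. As
+a minimal illustration (the target address is a hypothetical placeholder),
+such RTT samples can be collected and averaged like this:
+
+.. code-block:: python
+
+   import re
+   import subprocess
+
+   # Ping the peer VM ten times and summarize the reported RTTs.
+   out = subprocess.check_output(
+       ["ping", "-c", "10", "10.0.0.2"], text=True)
+   rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]
+   print("min %.2f ms, avg %.2f ms, max %.2f ms"
+         % (min(rtts), sum(rtts) / len(rtts), max(rtts)))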
+
+TC002
+-----
+The round-trip time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 1.59 and
+1.70 ms. Two test runs reach the same greatest RTT spike of 3.06 ms (their
+averages are 1.66 and 1.70 ms), but only one of them also has the lowest RTT,
+1.35 ms. The other two runs have no similar spike at all. To be able to draw
+conclusions, more runs should be made. SLA set to 10 ms. The SLA value is
+used as a reference; it has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, and the
+greatest IO read bandwidth of the four runs is 173.3 MB/s. The IO read
+bandwidth of the four runs looks similar across the four days, with an
+average between 32.7 and 60.4 MB/s. One of the runs has a minimum BW of 429
+KB/s and another has a maximum BW of 173.3 MB/s. The read bandwidth SLA is
+set to 400 MB/s; it is used as a reference and has not been defined by OPNFV.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is 1.1 ns on average. The
+variations within the test runs differ: some span a large range while others
+change very little. For example, the largest change is on September 14, where
+the memory read latency ranges from 1.12 ns to 1.22 ns. However, the results
+on September 12 change very little, ranging from 1.14 ns to 1.17 ns. The SLA
+is set to 30 ns. The SLA value is used as a reference; it has not been defined
+by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. The results on September 13 look similar within the
+date, with values between 0.0087 and 0.0190 ms, which is 0.0126 ms on
+average. However, on the fourth day the packet delay variation changes widely
+within the date, ranging from 0.0032 ms to 0.0121 ms, and has the minimum
+average value. The packet delay variations of the other two test runs look
+relatively similar, at 0.0076 ms and 0.0152 ms on average. The SLA value is
+set to 10 ms. The SLA value is used as a reference; it has not been defined
+by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, for which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth within the second day keeps almost stable, at 11.58 GB/s on average.
+The memory bandwidth of the fourth day looks similar to that of the second
+day; both remain stable. The other two test runs vary over a larger range, in
+which the minimum memory bandwidth is 11.22 GB/s and the maximum bandwidth is
+16.65 GB/s, with an average bandwidth of about 12.20 GB/s. The SLA is set to
+15 GB/s. The SLA value is used as a reference; it has not been defined by
+OPNFV.
+
+TC014
+-----
+Unixbench is used to measure processing speed, that is, instructions per
+second. It can be seen from the dashboard that the processing test results
+vary from scores 3272 to 3444, with one result per date. The
+overall average score is 3371. No SLA set.
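+
+The TC037 measurements below drive UDP traffic with the kernel pktgen module.
+A rough sketch of the kind of configuration involved is shown here; the
+device, destination address and flow counts are illustrative placeholders,
+not the values used by the suite:
+
+.. code-block:: python
+
+   # Requires root and a loaded pktgen module (modprobe pktgen).
+   def pgset(path, cmd):
+       # pktgen is controlled by writing one command per write to /proc.
+       with open(path, "w") as f:
+           f.write(cmd + "\n")
+
+   pgset("/proc/net/pktgen/kpktgend_0", "rem_device_all")
+   pgset("/proc/net/pktgen/kpktgend_0", "add_device eth0")
+
+   dev = "/proc/net/pktgen/eth0"
+   pgset(dev, "count 1000000")   # packets to send in the run
+   pgset(dev, "pkt_size 64")     # small UDP packets, as for PPS tests
+   pgset(dev, "dst 10.0.0.2")    # placeholder peer address
+   pgset(dev, "flows 10000")     # number of concurrent flows
+   pgset(dev, "flowlen 32")      # packets per flow before switching
+
+   pgset("/proc/net/pktgen/pgctrl", "start")  # blocks until the run is done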
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is 119.85, 128.02, 121.40 and
+126.08 kpps, of which the result of the second run is the highest. The RTT
+results of all the test runs keep flat at approx. 37 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240k on average, and the PPS
+results look somewhat uneven, since the largest packet throughput is 184 kpps
+and the minimum throughput is 49 kpps.
+
+There are no packet receive errors in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping in the four
+runs have a similar average value, that is, 38 ms, of which the worst RTT is
+93 ms on Sep. 14th.
+
+CPU load varies a lot across the four test runs, since the minimum value and
+the peak of CPU load are 0 percent and 51 percent respectively. The best
+result is obtained on Sep. 14th.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 22.3 GB/s to 26.8 GB/s and then down to 18.5 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 8 kb and 16 kb, the memory write bandwidth looks
+similar, with a minimum BW of 22.5 GB/s and a peak value of 28.7 GB/s. The
+SLA is set to 7 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT can reach
+more than 80 ms and the average RTT is usually approx. 38 ms. On the whole, the
+average RTTs of the four runs keep flat.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 268 MiB on
+Sep 14, which also has the largest minimum memory. Besides, the other three
+test runs have similar used memory. On the other hand, the free memory of the
+four runs has the same smallest minimum value, that is about 223 MiB, and the
+maximum free memory of three runs has a similar result, that is 337 MiB,
+except on Sep. 14th, where the maximum free memory is 254 MiB. On the whole,
+all the test runs have similar average free memory.
+
+Network throughput and packet loss can be measured by pktgen, which is a tool
+in the network for generating traffic loads for network experiments. The mean
+network throughput of the four test runs seems quite different, ranging from
+119.85 kpps to 128.02 kpps.
The average number of flows in these tests is
+24000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 million. At the same time, the corresponding packet
+throughput differs between 49.4k and 193.3k with an average packet throughput
+of approx. 125k. On the whole, the PPS results seem consistent. Within each
+of the four test runs, the packet throughput does not grow as the number of
+flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT can reach
+more than 94 ms and the average RTT is usually approx. 35 ms. On the whole, the
+average RTTs of the four runs keep flat.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as packet generator tool.
+The largest cache size is 212 MiB in the four runs, and the smallest cache
+size is 75 MiB. On the whole, the average cache size of the four runs is
+approx. 208 MiB. Meanwhile, the trend of the buffer size looks similar across
+the runs.
+
+Packet throughput can be measured by pktgen, which is a tool in the network for
+generating traffic loads for network experiments. The mean packet throughput of
+the four test runs seems quite different, ranging from 119.85 kpps to 128.02
+kpps. The average number of flows in these tests is 239.7k, and each run has a
+minimum number of flows of 2 and a maximum number of flows of 1.001 million. At
+the same time, the corresponding packet throughput differs between 49.4k and
+193.3k with an average packet throughput of approx. 125k. On the whole, the PPS
+results seem consistent. Within each of the four test runs, the packet
+throughput does not grow as the number of flows becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 32 ms. The PPS results are not as consistent as the RTT results.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between
+the VMs using pktgen as packet generator tool. The largest total number of
+packets transmitted per second differs between runs: the smallest number of
+packets transmitted per second is 6 pps on Sep. 12th and the largest is
+210.8 kpps. Meanwhile, the largest total number of packets received per second
+also differs between runs: the smallest number of packets received per
+second is 2 pps on Sep.
13th and the largest is 250.2 kpps.
+
+In some test runs when running with less than approx. 90000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 1000000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD5_ with:
+
+- Joid
+- OpenStack Mitaka
+- OpenVirtualSwitch 2.5.90
+- OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
+
diff --git a/docs/release/results/os-nosdn-nofeature-noha.rst b/docs/release/results/os-nosdn-nofeature-noha.rst
new file mode 100644
index 000000000..8b7c184bb
--- /dev/null
+++ b/docs/release/results/os-nosdn-nofeature-noha.rst
@@ -0,0 +1,259 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+========================================
+Test Results for os-nosdn-nofeature-noha
+========================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Joid
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then also to
+zoom in on the details of each individual test run.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD5_ between September 12 and 15 in 2016.
+
+TC002
+-----
+The round-trip time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 1.50 and
+1.68 ms. Only one test run reaches the greatest RTT spike of 2.92 ms, and the
+same run also has the smallest RTT of 1.06 ms. The other three runs have no
+similar spike at all; their minimum and average RTTs are approx. 1.50 ms and
+1.68 ms. SLA set to 10 ms. The SLA value is used as a reference; it has not
+been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is 177.5
+MB/s. The IO read bandwidth of the four runs looks similar across the four
+days, with an average between 46.7 and 62.5 MB/s. One of the runs has a
+minimum BW of 680 KB/s and another has a maximum BW of 177.5 MB/s. The read
+bandwidth SLA is set to 400 MB/s; it is used as a reference and has not been
+defined by OPNFV.
+
+The storage IOPS results of the four runs look similar to each other. The
+test runs all reach approx. 1.55k IO reads per second, with a minimum value
+of less than 60 per second.
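+
+The TC005 bandwidth and IOPS figures above come from fio. As a sketch only
+(the job size, runtime and target file below are illustrative placeholders,
+not the suite's actual job parameters), a random-read job with JSON output
+can be run and summarized like this:
+
+.. code-block:: python
+
+   import json
+   import subprocess
+
+   # Run a short random-read fio job and read bandwidth/IOPS from its
+   # JSON output; all job parameters here are placeholders.
+   out = subprocess.check_output([
+       "fio", "--name=readtest", "--filename=/tmp/fio.dat",
+       "--rw=randread", "--bs=4k", "--size=256M",
+       "--runtime=30", "--time_based", "--output-format=json",
+   ])
+   read = json.loads(out)["jobs"][0]["read"]
+   # fio reports bandwidth in KiB/s.
+   print("read bw %.1f MB/s, %.0f IOPS" % (read["bw"] / 1024.0, read["iops"]))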
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.134 ns and 1.227
+ns on average. The variations within the test runs are quite different: some
+span a large range while others change very little. For example, the largest
+change is on September 15, where the memory read latency ranges from 1.116 ns
+to 1.393 ns. However, the results on September 12 change very little, mainly
+keeping flat in the range from 1.124 ns to 1.55 ns. The SLA is set to 30 ns.
+The SLA value is used as a reference; it has not been defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. The results on September 13 look similar within the
+date, with values between 0.0213 and 0.0225 ms, which is 0.0217 ms on average.
+However, on the third day the packet delay variation changes widely within the
+date, ranging from 0.008 ms to 0.0225 ms, and has the minimum value. On
+Sep. 12 the packet delay variation is quite large, with values between 0.0236
+and 0.0287 ms, including the maximum packet delay variation of 0.0287 ms. The
+packet delay variation of the last test run is 0.0151 ms on average. The SLA
+value is set to 10 ms. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, for which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth of three test runs keeps almost stable within each run, at 11.65,
+11.57 and 11.64 GB/s on average. However, the memory read and write bandwidth
+on Sep. 14 spans a larger range, from 11.36 GB/s to 16.68 GB/s. The SLA is
+set to 15 GB/s. The SLA value is used as a reference; it has not been defined
+by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3222 to 3585, with
+one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is 124.8, 160.1, 113.8 and
+137.3 kpps, of which the result of the second run is the highest. The RTT
+results of all the test runs keep flat at approx. 37 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240k on average, and the PPS
+results look somewhat uneven, since the largest packet throughput is 243.1 kpps
+and the minimum throughput is 37.6 kpps.
+
+There are no packet receive errors in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping in the four
+runs have a similar average value, between 32 ms and 41 ms, of which
+the worst RTT is 155 ms on Sep. 14th.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+fairly similar, since the minimum value and the peak of CPU load are 0 percent
+and 9 percent respectively. The best result is obtained on Sep. 15th, with a
+CPU load of nine percent.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 22.4 GB/s to 26.5 GB/s and then down to 18.6 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 8 kb and 16 kb, the memory write bandwidth looks
+similar, with a minimum BW of 22.5 GB/s and a peak value of 28.7 GB/s. Then,
+as the block size becomes larger, the memory write bandwidth tends to
+decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
+has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of three test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+95 ms and the average RTT is usually approx. 36 ms. The network latency tested
+on Sep. 14 shows a peak latency of 155 ms. But on the whole, the average RTTs
+of the four runs keep flat.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 270 MiB on
+Sep 13, which also has the smallest minimum memory utilization. Besides, the
+other three test runs have similar used memory, with an average memory usage
+of 264 MiB. On the other hand, the free memory of the four runs has the same
+smallest minimum value, that is about 223 MiB, and the maximum free memory of
+three runs has a similar result, that is 226 MiB, except on Sep. 13th, where
+the maximum free memory is 273 MiB. On the whole, all the test runs have
+similar average free memory.
+
+Network throughput and packet loss can be measured by pktgen, which is a tool
+in the network for generating traffic loads for network experiments. The mean
+network throughput of the four test runs seems quite different, ranging from
+119.85 kpps to 128.02 kpps. The average number of flows in these tests is
+240000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 million. At the same time, the corresponding packet
+throughput differs between 38k and 243k with an average packet throughput of
+approx. 134k. On the whole, the PPS results seem consistent. Within each of
+the four test runs, the packet throughput does not grow as the number of
+flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT can reach
+79 ms and the average RTT is usually approx. 35 ms. On the whole, the average
+RTTs of the four runs keep flat.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as packet generator tool.
+The largest cache size is 214 MiB in the four runs, and the smallest cache
+size is 100 MiB. On the whole, the average cache size of the four runs is
+approx. 210 MiB. Meanwhile, the trend of the buffer size looks similar across
+the runs. The mean buffer size of the four runs keeps flat, with a minimum
+value of approx. 7 MiB and a maximum value of 8 MiB, and an average value of
+about 8 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool in the network for
+generating traffic loads for network experiments. The mean packet throughput of
+the four test runs seems quite different, ranging from 113.8 kpps to 124.8
+kpps. The average number of flows in these tests is 240k, and each run has a
+minimum number of flows of 2 and a maximum number of flows of 1.001 million. At
+the same time, the corresponding packet throughput differs between 47.6k and
+243.1k with an average packet throughput between 113.8k and 160.1k. Within
+each of the four test runs, the packet throughput does not grow as the number
+of flows becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 79 ms with an average latency of approx. 35 ms. The PPS
+results are not as consistent as the RTT results, for the mean packet
+throughput of the four runs differs from 113.8 kpps to 124.8 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between
+the VMs using pktgen as packet generator tool. The largest total number of
+packets transmitted per second looks similar in the first three runs, with a
+minimum number of 10 pps and a maximum number of 97 kpps, except for the run
+on Sep. 15th, in which the number of packets transmitted per second is 10 pps.
+Meanwhile, the largest total number of packets received per second differs
+between runs: the smallest number of packets received per second is 1 pps and
+the largest is 276 kpps.
+
+In some test runs when running with less than approx. 90000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 1000000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs.
There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD5_ with:
+Joid
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-odl_l2-bgpvpn-ha.rst b/docs/release/results/os-odl_l2-bgpvpn-ha.rst
new file mode 100644
index 000000000..2bd6dc35d
--- /dev/null
+++ b/docs/release/results/os-odl_l2-bgpvpn-ha.rst
@@ -0,0 +1,53 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+====================================
+Test Results for os-odl_l2-bgpvpn-ha
+====================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Ericsson POD2_ between September 7 and 11 in 2016.
+
+TC043
+-----
+The round-trip-time (RTT) between 2 nodes is measured using
+ping. Most test run measurements result on average between 0.21 and 0.28 ms.
+A few runs start with a 0.32 - 0.35 ms RTT spike (this could be because of
+normal ARP handling). To be able to draw conclusions, more runs should be
+made. The SLA is set to 10 ms. The SLA value is used as a reference, it has
+not been defined by OPNFV.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ with:
+Fuel 9.0
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
diff --git a/docs/release/results/os-odl_l2-nofeature-ha.rst b/docs/release/results/os-odl_l2-nofeature-ha.rst
new file mode 100644
index 000000000..ac0c5bb59
--- /dev/null
+++ b/docs/release/results/os-odl_l2-nofeature-ha.rst
@@ -0,0 +1,743 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=======================================
+Test Results for os-odl_l2-nofeature-ha
+=======================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+apex
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the LF POD1_ between September 14 and 17 in 2016.
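+
+The RTT figures in TC002 below are collected with plain ping between the VMs.
+A minimal sketch of pulling the summary statistics out of GNU iputils ping
+output (illustrative only, not Yardstick code; the peer address is a
+placeholder)::
+
+    import re
+    import subprocess
+
+    # Ping the peer VM 10 times and extract the "rtt min/avg/max/mdev" line.
+    out = subprocess.run(["ping", "-c", "10", "10.0.0.2"],
+                         capture_output=True, text=True).stdout
+    match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
+    if match:
+        rtt_min, rtt_avg, rtt_max, mdev = map(float, match.groups())
+        print(f"min={rtt_min} avg={rtt_avg} max={rtt_max} mdev={mdev} ms")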
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result on average between 0.49 ms and 0.60 ms.
+Only one test run reached the greatest RTT spike of 0.93 ms. Meanwhile, the
+smallest network latency is 0.33 ms, which is obtained on Sep. 14th.
+The SLA is set to 10 ms. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is
+416 MB/s. The IO read bandwidth of all four runs looks similar, with an
+average between 128 and 131 MB/s. One of the runs has a minimum BW of
+497 KB/s. The SLA for read bandwidth is set to 400 MB/s; it is used as a
+reference and has not been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read times per second of the four test runs have an average value of 1k per
+second, while the minimum result is only 45 times per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.0859 ns and
+1.0869 ns on average. The variations within each test run differ: some runs
+vary across a large range while others change only a little. For example, the
+largest change is on September 14th, where the memory read latency ranges from
+1.086 ns to 1.091 ns.
+The SLA is set to 30 ns. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. On the first two test runs the reported packet delay variation varies
+between 0.0037 and 0.0740 ms, with an average delay variation between
+0.0096 ms and 0.0321 ms. On the second date the delay variation varies between
+0.0063 and 0.0096 ms, with an average delay variation of 0.0124 - 0.0141 ms.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth trends of three runs look similar and all have a narrow range, and
+the average result is 19.88 GB/s. The SLA is set to 15 GB/s. The SLA value is
+used as a reference, it has not been defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+single CPU and parallel running scores. It can be seen from the dashboard that
+the processing test results vary from scores 3754k to 3831k, and there is only
+one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The mean packet throughput of the four test runs is between 307.3 kpps and
+447.1 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs stay flat at approx. 15 ms. The PPS results are
+clearly not as consistent as the RTT results.
+
+The number of 
flows in the four test runs is 240 k on average and the PPS results
+look somewhat uneven, since the largest packet throughput is 418.1 kpps and
+the minimum throughput is 326.5 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four runs
+have a similar average value of approx. 15 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and peak CPU load between 0 percent and nine percent
+respectively. The best result is obtained on Sep. 1, with a CPU load of nine
+percent. On the whole, however, the CPU load is very low, since the average
+value is quite small.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 28.2 GB/s to 29.5 GB/s and then down to 29.2 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 2 kb or 16 kb, the memory write bandwidth looks
+similar, with a minimal BW of 25.8 GB/s and a peak value of 28.3 GB/s. As the
+block size becomes larger, the memory write bandwidth tends to decrease. The
+SLA is set to 7 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 15 ms. The network latency tested
+on Sep. 1 and Sep. 8 has a peak latency of 39 ms. On the whole, the average
+RTTs of the four runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which displays the amount of free and
+used memory in the system. The largest amount of used memory is 267 MiB for
+the four runs. In general, the four test runs have very large memory
+utilization, which can reach 257 MiB on average. On the other hand, the mean
+free memory of the four test runs shows a trend similar to that of the mean
+used memory. In general, the mean free memory changes from 233 MiB to 241 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, a tool for
+generating traffic loads for network experiments. The mean packet throughput
+of the four test runs differs noticeably, ranging from 305.3 kpps to
+447.1 kpps. The average number of flows in these tests is 240000, and each run
+has a minimum number of flows of 2 and a maximum number of flows of 1.001 Mil.
+At the same time, the corresponding average packet throughput is between
+354.4 kpps and 381.8 kpps. In summary, the PPS results seem consistent. Within
+each of the four test runs, a larger number of flows does not yield higher
+packet throughput.
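+
+The memory figures above are read with the free command. One rough
+illustrative stand-in (an assumption of this text, not what Yardstick itself
+runs) is to derive used and free memory from /proc/meminfo::
+
+    # Derive used/free memory in MiB roughly the way `free` does.
+    def meminfo_mib():
+        info = {}
+        with open("/proc/meminfo") as f:
+            for line in f:
+                key, value = line.split(":", 1)
+                info[key] = int(value.split()[0])  # values are in KiB
+        free = info["MemFree"] // 1024
+        used = (info["MemTotal"] - info["MemFree"]
+                - info["Buffers"] - info["Cached"]) // 1024
+        return used, free
+
+    print("used/free (MiB):", meminfo_mib())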
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only
+42 ms and the average RTT is usually approx. 15 ms. On the whole, the average
+RTTs of the four runs keep stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 212 MiB, which is the same for the four runs,
+and the smallest cache size is 75 MiB. On the whole, the average cache size of
+the four runs looks similar and is between 197 MiB and 211 MiB. Meanwhile, the
+trend of the buffer size stays flat, with a minimum value of 7 MiB and a
+maximum value of 8 MiB, and an average value of about 7.9 MiB.
+
+Packet throughput can be measured by pktgen, a tool for generating traffic
+loads for network experiments. The mean packet throughput of the four test
+runs differs from 354.4 kpps to 381.8 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum number of flows of 2 and a
+maximum number of flows of 1.001 Mil. At the same time, the corresponding
+packet throughput differs between 305.3 kpps and 447.1 kpps. Within each of
+the four test runs, a larger number of flows does not yield higher packet
+throughput.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
+results are not as consistent as the RTT results, for the mean packet
+throughput of the four runs differs from 354.4 kpps to 381.8 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for three test runs, whose values vary
+widely from 10 pps to 501 kpps, while the remaining test run keeps stable at
+an average of 10 packets transmitted per second. However, the total number of
+packets received per second of the four runs looks similar, with a wide range
+of 2 pps to 815 kpps.
+
+In some test runs, when running with less than approx. 251000 flows, the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. 
For the other test runs, however, there is no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is lower with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on LF POD1_ with:
+Apex
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result on average between 0.5 and 0.6 ms.
+A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
+ARP handling). One test run has a greater RTT spike of 1.9 ms; it is the same
+run that has the 0.7 ms average. The other runs have no similar spike at all.
+To be able to draw conclusions, more runs should be made.
+The SLA is set to 10 ms. The SLA value is used as a reference, it has not
+been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 170 and 200 MB/s. Within each test run the results
+vary, with a minimum of 2 MB/s and a maximum of 838 MB/s overall. Most runs
+have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more
+in absolute numbers between the dates, between 617 and 838 MB/s.
+The SLA is set to 400 MB/s. The SLA value is used as a reference, it has not
+been defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 1.2 ns. The variations within each test run are similar, between
+1.215 and 1.219 ns. One exception is February 16, where the average is
+1.222 ns and varies between 1.22 and 1.28 ns.
+The SLA is set to 30 ns. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. On the first date the reported packet delay variation varies between
+0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
+On the second date the delay variation varies between 0.002 and 0.006 ms, with
+an average delay variation of 0.004 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal
+BW of 16.4 GB/s and a maximum of 18.2 GB/s overall.
+The SLA is set to 15 GB/s. 
The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 3080 and 3240,
+with one result per date. The overall average score is 3150.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is lower with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as the packet generator tool. The average measurements for CPU
+utilization ratio vary between 1% and 2%. The peak of the CPU utilization
+ratio appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal
+BW of 9.7 GB/s and a maximum of 29.5 GB/s overall.
+The SLA is set to 6 GB/s. The SLA value is used as a reference, it has not
+been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 
10000 flows, the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is lower with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The average measurements for
+memory utilization vary between 225 MB and 246 MB. The peak of memory
+utilization appears around 340 MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is lower with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The average measurements for
+cache utilization vary between 205 MB and 212 MB.
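+
+cachestat itself is a small perf-tools utility. As a rough illustrative
+stand-in (an assumption of this text, not the tool the test actually runs),
+comparable cache and buffer sizes can be sampled from /proc/meminfo::
+
+    import time
+
+    # Sample page-cache and buffer sizes once per second, cachestat-style.
+    def sample_mib():
+        with open("/proc/meminfo") as f:
+            info = dict(line.split(":", 1) for line in f)
+        cached = int(info["Cached"].split()[0]) // 1024
+        buffers = int(info["Buffers"].split()[0]) // 1024
+        return cached, buffers
+
+    for _ in range(5):
+        print("cache/buffer (MiB):", sample_mib())
+        time.sleep(1)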
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is lower with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+received per second averaged 200 kpps, and the total number of packets
+transmitted per second averaged 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+Fuel 9.0
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is roughly a 3000 percent difference
+in RTT results.
+Notably, RTT and throughput come out with better results than, for instance,
+the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should
+probably be further analyzed and understood. Also of interest could be
+further analyses to find patterns and reasons for lost traffic, and to see
+whether there are continuous variations where some test cases stand out with
+better or worse results than the general test case.
+
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. 
_POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD6_ between September 1 and 8 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result on average between 1.01 ms and 1.88 ms.
+Only one test run reached the greatest RTT spike of 1.88 ms. Meanwhile, the
+smallest network latency is 1.01 ms, which is obtained on Sep. 1st. In general,
+the average network latency of the four test runs is between 1.29 ms and
+1.34 ms. The SLA is set to 10 ms. The SLA value is used as a reference, it has
+not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is
+183.65 MB/s. The IO read bandwidth of three runs looks similar, with an
+average between 62.9 and 64.3 MB/s, except one on Sep. 1, whose maximum
+storage throughput is only 159.1 MB/s. One of the runs has a minimum BW of
+685 KB/s and another has a maximum BW of 183.6 MB/s. The SLA for read
+bandwidth is set to 400 MB/s; it is used as a reference and has not been
+defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read times per second of the four test runs have an average value between
+1.41k per second and 1.64k per second, while the minimum result is only 55
+times per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.152 ns and
+1.179 ns on average. The variations within each test run differ: some runs
+vary across a large range and others change only a little. For example, the
+largest change is on September 8, where the memory read latency ranges from
+1.120 ns to 1.221 ns, while the results on September 7 change very little. The
+SLA is set to 30 ns. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. In general, the packet delay of the first two runs
+looks similar, as both stay stable within each run, with mean packet delays of
+0.0087 ms and 0.0127 ms respectively. Of the four runs, the fourth has the
+worst result, because the packet delay reaches 0.0187 ms. The SLA value is set
+to 10 ms. The SLA value is used as a reference, it has not been defined by
+OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth trends of three runs look similar and all have a narrow range, and
+the average result is 11.78 GB/s. The SLA is set to 15 GB/s. The SLA value is
+used as a reference, it has not been defined by OPNFV.
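+
+TC012 and TC069 drive lmbench's bw_mem binary over a range of block sizes. A
+minimal sketch of such a sweep (illustrative only; bw_mem's exact output
+format can differ between lmbench builds, so the parsing here is an
+assumption)::
+
+    import subprocess
+
+    # Sweep block sizes from 1 kb to 512 kb and record INT memory write
+    # bandwidth; bw_mem is assumed to be on PATH and to print its result
+    # line (size and MB/s) on stderr, as lmbench tools typically do.
+    for kb in (1, 2, 4, 8, 16, 32, 64, 128, 256, 512):
+        result = subprocess.run(["bw_mem", f"{kb}k", "wr"],
+                                capture_output=True, text=True)
+        print(f"{kb} kb:", (result.stderr or result.stdout).strip())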
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+single CPU and parallel running scores. It can be seen from the dashboard that
+the processing test results vary from scores 3260k to 3328k, and there is only
+one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The mean packet throughput of the four test runs is between 307.3 kpps and
+447.1 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs stay flat at approx. 15 ms. The PPS results are
+clearly not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average and the PPS
+results look somewhat uneven, since the largest packet throughput is
+418.1 kpps and the minimum throughput is 326.5 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four runs
+have a similar average value of approx. 15 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and peak CPU load between 0 percent and nine percent
+respectively. The best result is obtained on Sep. 1, with a CPU load of nine
+percent. On the whole, however, the CPU load is very low, since the average
+value is quite small.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 21.9 GB/s to 25.9 GB/s and then down to 17.8 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 2 kb or 16 kb, the memory write bandwidth looks
+similar, with a minimal BW of 24.8 GB/s and a peak value of 27.8 GB/s. As the
+block size becomes larger, the memory write bandwidth tends to decrease. The
+SLA is set to 7 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 15 ms. The network latency tested
+on Sep. 1 and Sep. 8 has a peak latency of 39 ms. On the whole, the average
+RTTs of the four runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which displays the amount of free and
+used memory in the system. The largest amount of used memory is 267 MiB for
+the four runs. In general, the four test runs have very large memory
+utilization, which can reach 257 MiB on average. 
On the other hand, the mean
+free memory of the four test runs shows a trend similar to that of the mean
+used memory. In general, the mean free memory changes from 233 MiB to 241 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, a tool for
+generating traffic loads for network experiments. The mean packet throughput
+of the four test runs differs noticeably, ranging from 305.3 kpps to
+447.1 kpps. The average number of flows in these tests is 240000, and each run
+has a minimum number of flows of 2 and a maximum number of flows of 1.001 Mil.
+At the same time, the corresponding average packet throughput is between
+354.4 kpps and 381.8 kpps. In summary, the PPS results seem consistent. Within
+each of the four test runs, a larger number of flows does not yield higher
+packet throughput.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only
+42 ms and the average RTT is usually approx. 15 ms. On the whole, the average
+RTTs of the four runs keep stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 212 MiB, which is the same for the four runs,
+and the smallest cache size is 75 MiB. On the whole, the average cache size of
+the four runs looks similar and is between 197 MiB and 211 MiB. Meanwhile, the
+trend of the buffer size stays flat, with a minimum value of 7 MiB and a
+maximum value of 8 MiB, and an average value of about 7.9 MiB.
+
+Packet throughput can be measured by pktgen, a tool for generating traffic
+loads for network experiments. The mean packet throughput of the four test
+runs differs from 354.4 kpps to 381.8 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum number of flows of 2 and a
+maximum number of flows of 1.001 Mil. At the same time, the corresponding
+packet throughput differs between 305.3 kpps and 447.1 kpps. Within each of
+the four test runs, a larger number of flows does not yield higher packet
+throughput.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
+results are not as consistent as the RTT results, for the mean packet
+throughput of the four runs differs from 354.4 kpps to 381.8 kpps.
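+
+The network utilization figures discussed in the next paragraph come from
+sar. A minimal sketch of collecting per-second rxpck/s and txpck/s samples
+for an interface (illustrative only, not Yardstick code; the interface name
+is a placeholder and sysstat's column layout can vary slightly between
+versions)::
+
+    import subprocess
+
+    # Collect 5 one-second samples; sar -n DEV prints one row per interface
+    # with rxpck/s and txpck/s in the columns after the interface name.
+    out = subprocess.run(["sar", "-n", "DEV", "1", "5"],
+                         capture_output=True, text=True).stdout
+    for line in out.splitlines():
+        cols = line.split()
+        if len(cols) > 3 and cols[1] == "eth0":
+            print("rxpck/s:", cols[2], "txpck/s:", cols[3])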
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for three test runs, whose values vary
+widely from 10 pps to 501 kpps, while the remaining test run keeps stable at
+an average of 10 packets transmitted per second. However, the total number of
+packets received per second of the four runs looks similar, with a wide range
+of 2 pps to 815 kpps.
+
+In some test runs, when running with less than approx. 251000 flows, the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however
+no significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is lower with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+Joid
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
diff --git a/docs/release/results/os-odl_l2-sfc-ha.rst b/docs/release/results/os-odl_l2-sfc-ha.rst
new file mode 100644
index 000000000..e27562cae
--- /dev/null
+++ b/docs/release/results/os-odl_l2-sfc-ha.rst
@@ -0,0 +1,231 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==================================
+Test Results for os-odl_l2-sfc-ha
+==================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Fuel
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the LF POD2_ or Ericsson POD2_ between September 16 and 20 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result on average between 0.32 ms and 1.42 ms.
+Only one test run, on Sep. 20, reached the greatest RTT spike of 4.66 ms.
+Meanwhile, the smallest network latency is 0.16 ms, which is obtained on Sep.
+17th. To sum up, the network latency curve varies very little, staying below
+5 ms. The SLA is set to 10 ms. The SLA value is used as a reference, it has
+not been defined by OPNFV.
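+
+TC005 below reads the storage throughput with fio. A minimal sketch of a
+comparable sequential-read job (illustrative only; the file name and job
+parameters are placeholders, and the JSON fields are assumed from fio's
+documented output)::
+
+    import json
+    import subprocess
+
+    # Run a short sequential-read job and report bandwidth and IOPS from
+    # fio's JSON output.
+    out = subprocess.run(
+        ["fio", "--name=readjob", "--rw=read", "--size=64m",
+         "--filename=/tmp/fio.test", "--output-format=json"],
+        capture_output=True, text=True).stdout
+    job = json.loads(out)["jobs"][0]["read"]
+    print("read bw (KB/s):", job["bw"], "iops:", job["iops"])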
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is
+734 MB/s. The IO read bandwidth of the first three runs looks similar, with an
+average of less than 100 KB/s, except one on Sep. 20, whose maximum storage
+throughput can reach 734 MB/s. The SLA for read bandwidth is set to 400 MB/s;
+it is used as a reference and has not been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read times per second of the four test runs have an average value between
+1.8k per second and 3.27k per second, while the minimum result is only 60
+times per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.085 ns and
+1.218 ns on average. The variations within each test run are quite small. For
+Ericsson POD2, the average memory latency is approx. 1.217 ns, while for LF
+POD2 the average value is about 1.085 ns. It can be seen that the performance
+of LF is better than Ericsson's. The SLA is set to 30 ns. The SLA value is
+used as a reference, it has not been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. The four test runs all have a narrow
+range of change, with an average memory read and write BW of 18.5 GB/s. The
+SLA is set to 15 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+single CPU and parallel running scores. It can be seen from the dashboard that
+the processing test results vary from scores 3209k to 3843k, and there is only
+one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The mean packet throughput of three test runs is between 439 kpps and
+582 kpps, and the test run on Sep. 17th has the lowest average value of
+371 kpps. The RTT results of all the test runs stay flat at approx. 10 ms. The
+PPS results are clearly not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average and the PPS
+results look somewhat uneven, since the largest packet throughput is 680 kpps
+and the minimum throughput is 319 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four runs
+have a similar trend, with an average value of approx. 12 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and peak CPU load between 0 percent and ten percent
+respectively. The best result is obtained on Sep. 17th, with a CPU load of ten
+percent. On the whole, however, the CPU load is very low, since the average
+value is quite small.
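+
+The CPU load above is taken from mpstat. As a rough illustrative equivalent
+(an assumption of this text, not the Yardstick implementation), overall CPU
+utilization can be derived from two /proc/stat samples::
+
+    import time
+
+    # Read the aggregate "cpu" counters from the first line of /proc/stat.
+    def cpu_times():
+        with open("/proc/stat") as f:
+            fields = f.readline().split()
+        return [int(x) for x in fields[1:]]
+
+    a = cpu_times()
+    time.sleep(1)
+    b = cpu_times()
+    deltas = [y - x for x, y in zip(a, b)]
+    idle = deltas[3]  # the 4th counter is idle time
+    print(f"CPU load: {100.0 * (sum(deltas) - idle) / sum(deltas):.1f}%")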
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the average memory write
+bandwidth tends to become larger first and then smaller within every test run
+for the two pods, ranging from 25.1 GB/s to 29.4 GB/s and then down to
+19.2 GB/s on average. Since the test id is one, only the INT memory write
+bandwidth is tested. On the whole, as the block size becomes larger, the
+memory write bandwidth tends to decrease. The SLA is set to 7 GB/s. The SLA
+value is used as a reference, it has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+27 ms and the average RTT is usually approx. 12 ms. The network latency tested
+on Sep. 27th has a peak latency of 27 ms. On the whole, the average RTTs of
+the four runs stay flat.
+
+Memory utilization is measured by free, which displays the amount of free and
+used memory in the system. The largest amount of used memory is 269 MiB for
+the four runs. In general, the four test runs have very large memory
+utilization, which can reach 251 MiB on average. On the other hand, the mean
+free memory of the four test runs shows a trend similar to that of the mean
+used memory. In general, the mean free memory changes from 231 MiB to 248 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, a tool for
+generating traffic loads for network experiments. The mean packet throughput
+of the four test runs differs noticeably, ranging from 371 kpps to 582 kpps.
+The average number of flows in these tests is 240000, and each run has a
+minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At
+the same time, the corresponding average packet throughput is between 319 kpps
+and 680 kpps. In summary, the PPS results seem consistent. Within each of the
+four test runs, a larger number of flows does not yield higher packet
+throughput.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only
+24 ms and the average RTT is usually approx. 12 ms. On the whole, the average
+RTTs of the four runs keep stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 213 MiB and the smallest cache size is 99 MiB,
+which is the same for the four runs. 
On the whole, the average cache size of
+the four runs looks similar and is between 184 MiB and 205 MiB. Meanwhile, the
+trend of the buffer size keeps stable, with a minimum value of 7 MiB and a
+maximum value of 8 MiB.
+
+Packet throughput can be measured by pktgen, a tool for generating traffic
+loads for network experiments. The mean packet throughput of the four test
+runs differs from 371 kpps to 582 kpps. The average number of flows in these
+tests is 240k, and each run has a minimum number of flows of 2 and a maximum
+number of flows of 1.001 Mil. At the same time, the corresponding packet
+throughput differs between 319 kpps and 680 kpps. Within each of the four test
+runs, a larger number of flows does not yield higher packet throughput.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 24 ms with an average latency of less than 13 ms. The PPS
+results are not as consistent as the RTT results, for the mean packet
+throughput of the four runs differs from 370 kpps to 582 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for the four test runs, with values
+varying widely from 10 pps to 697 kpps. However, the total number of packets
+received per second of three runs looks similar, with a wide range of 2 pps to
+1.497 Mpps, while the results on Sep. 18th and 20th have a very small maximum
+number of packets received per second of 817 kpps.
+
+In some test runs, when running with less than approx. 251000 flows, the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however
+no significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is lower with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+Fuel 9.0
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-onos-nofeature-ha.rst b/docs/release/results/os-onos-nofeature-ha.rst
new file mode 100644
index 000000000..d8b3ace5f
--- /dev/null
+++ b/docs/release/results/os-onos-nofeature-ha.rst
@@ -0,0 +1,257 @@
+.. 
This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+======================================
+Test Results for os-onos-nofeature-ha
+======================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 5 scenario test runs, each run
+on the Intel POD6_ between September 13 and 16 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result on average between 1.50 and 1.68 ms.
+Only one test run reached the greatest RTT spike of 2.62 ms, and the same run
+has the smallest RTT of 1.00 ms. The other four runs have no similar spike at
+all; their minimum and average RTTs are approx. 1.06 ms and 1.32 ms. The SLA
+is set to 10 ms. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is
+175.4 MB/s. The IO read bandwidth of the four runs looks similar across the
+four days, with an average between 58.1 and 62.0 MB/s, except one on Sep. 14,
+whose maximum storage throughput is only 133.0 MB/s. One of the runs has a
+minimum BW of 497 KB/s and another has a maximum BW of 177.4 MB/s. The SLA
+for read bandwidth is set to 400 MB/s; it is used as a reference and has not
+been defined by OPNFV.
+
+The results of storage IOPS for the five runs look similar to each other. The
+IO read times per second of the five test runs have an average value between
+1.20 K/s and 1.61 K/s, while the minimum result is only 41 times per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the five runs is between 1.146 ns and
+1.172 ns on average. The variations within each test run differ: some runs
+vary across a large range and others change only a little. For example, the
+largest change is on September 13, where the memory read latency ranges from
+1.152 ns to 1.221 ns, while the results on September 14 change very little.
+The SLA is set to 30 ns. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the five test runs
+differ from each other. In general, the packet delay of the first two runs
+looks similar, as both stay stable within each run, with mean packet delays of
+0.07714 ms and 0.07982 ms respectively. Of the five runs, the third has the
+worst result, because the packet delay reaches 0.08384 ms. The trend of the
+rest two runs looks the same, with average packet delays of 0.07808 ms and
+0.07727 ms respectively. The SLA value is set to 10 ms. 
The SLA value is used
+as a reference, it has not been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the five test runs, the memory
+bandwidth of the last three test runs almost keeps stable within each run, at
+11.64, 11.71 and 11.61 GB/s on average. However, the memory read and write
+bandwidth on Sep. 13 has a large range, from 6.68 GB/s to 11.73 GB/s. The SLA
+is set to 15 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+single CPU and parallel running scores. It can be seen from the dashboard that
+the processing test results vary from scores 3208 to 3314, and there is only
+one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The mean packet throughput of the five test runs is between 259.6 kpps and
+318.4 kpps, of which the result of the second run is the highest. The RTT
+results of all the test runs stay flat at approx. 20 ms. The PPS results are
+clearly not as consistent as the RTT results.
+
+The number of flows in the five test runs is 240 k on average and the PPS
+results look somewhat uneven, since the largest packet throughput is
+398.9 kpps and the minimum throughput is 250.6 kpps.
+
+There are no packet receive errors in the five runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the five runs
+have a similar average value, between 17 ms and 22 ms, of which the worst RTT
+is 53 ms on Sep. 14th.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and peak CPU load between 0 percent and 10 percent
+respectively. The best result is obtained on Sep. 13th, with a CPU load of
+10 percent.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 21.6 GB/s to 26.8 GB/s and then down to 18.4 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 8 kb or 16 kb, the memory write bandwidth looks
+similar, with a minimal BW of 23.0 GB/s and a peak value of 28.6 GB/s. As the
+block size becomes larger, the memory write bandwidth tends to decrease. The
+SLA is set to 7 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up, resulting in higher RTT and lower PPS throughput.
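+
+The packet delay variation in TC011 above comes from iperf3 run in UDP mode.
+A minimal sketch of reading the jitter figure (illustrative only, not
+Yardstick code; the server address is a placeholder, and the JSON layout is
+assumed from iperf3's documented output)::
+
+    import json
+    import subprocess
+
+    # Run a short UDP test against an iperf3 server and report the jitter.
+    out = subprocess.run(
+        ["iperf3", "-c", "10.0.0.2", "-u", "-t", "10", "-J"],
+        capture_output=True, text=True).stdout
+    report = json.loads(out)
+    print("jitter (ms):", report["end"]["sum"]["jitter_ms"])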
+
+The network latency is measured by ping, and the results of the five test
+runs look similar to each other; within these test runs, the maximum RTT can
+reach 53 ms and the average RTT is usually approx. 18 ms. The run on Sep. 14
+shows a peak latency of 53 ms, but on the whole the average RTTs of the five
+runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 272 MiB,
+on Sep. 14. In general, the mean used memory of the five test runs follows a
+similar trend; the minimum used memory size is approx. 150 MiB and the
+average used memory size is about 250 MiB. The mean free memory of the five
+test runs also follows a similar trend, ranging from 218 MiB to 342 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, a tool for
+generating traffic loads for network experiments. The mean packet throughput
+of the five test runs ranges from 285.29 kpps to 297.76 kpps. The average
+number of flows in these tests is 240000, and each run has a minimum number
+of flows of 2 and a maximum of 1.001 million. At the same time, the
+corresponding packet throughput varies between 250.6 kpps and 398.9 kpps,
+with an average packet throughput between 277.2 kpps and 318.4 kpps. In
+summary, the PPS results seem consistent. Within each of the five test runs,
+the packet throughput does not grow as the number of flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The network latency is measured by ping, and the results of the five test
+runs look similar to each other. Within each test run, the maximum RTT is
+only 49 ms and the average RTT is usually approx. 20 ms. On the whole, the
+average RTTs of the five runs stay stable and the network latency is
+relatively short.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffers in the system. Cache utilization statistics are collected
+during the UDP flows sent between the VMs using pktgen as the packet
+generator tool. The largest cache size is 215 MiB and the smallest cache size
+is 95 MiB. On the whole, the average cache size of the runs changes little,
+staying at about 200 MiB, except for the run on Sep. 14th, whose mean cache
+size is much smaller, at 102 MiB. Meanwhile, the trend of the buffer size
+stays flat, with a minimum value of 7 MiB and a maximum value of 8 MiB, and
+an average value of about 7.8 MiB.
+
+Packet throughput can be measured by pktgen, a tool for generating traffic
+loads for network experiments. The mean packet throughput of the four test
+runs ranges from 285.29 kpps to 297.76 kpps. The average number of flows in
+these tests is 239.7 k, and each run has a minimum number of flows of 2 and a
+maximum of 1.001 million. At the same time, the corresponding packet
+throughput varies between 227.3 kpps and 398.9 kpps, with an average packet
+throughput between 277.2 kpps and 318.4 kpps. Within each of the test runs,
+the packet throughput does not grow as the number of flows becomes larger.
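+
+For reference, cache and buffer figures like those above can be sampled with
+the cachestat tool (a sketch; cachestat is assumed here to be the script from
+the perf-tools collection, and the one-second interval is an assumption)::
+
+    # print page cache hit/miss statistics and cache/buffer sizes every second
+    ./cachestat 1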
+
+TC072
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 49 ms, with an average latency of less than 22 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the five runs differs from 250.6 kpps to 398.9 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during the UDP flows sent
+between the VMs using pktgen as the packet generator tool. The total number
+of packets transmitted per second looks similar for four of the test runs,
+with values varying widely from 10 pps to 399 kpps, except for the run on
+Sep. 14th, whose total number of packets transmitted per second stays stable
+at 10 pps. Similarly, the total number of packets received per second looks
+the same for four runs, except for the one on Sep. 14th, whose value is only
+10 pps.
+
+In some test runs, when running with less than approx. 90000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases. For the other test runs there is, however, no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 250000 flows than
+in other test runs where the PPS result is smaller with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no
+observed correlation between the amount of flows and the amount of lost
+packets. The amount of lost packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+
+- Joid
+- OpenStack Mitaka
+- ONOS Goldeneye
+- OpenVirtualSwitch 2.5.90
+- OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
diff --git a/docs/release/results/os-onos-sfc-ha.rst b/docs/release/results/os-onos-sfc-ha.rst
new file mode 100644
index 000000000..e52ae3d55
--- /dev/null
+++ b/docs/release/results/os-onos-sfc-ha.rst
@@ -0,0 +1,517 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+===============================
+Test Results for os-onos-sfc-ha
+===============================
+
+.. toctree::
+   :maxdepth: 2
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case.
It is possible to choose which specific scenarios to look at, and then to
+zoom in on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each
+run on the Ericsson POD2_ or LF POD2_ between September 5 and 10 in 2016.
+
+TC002
+-----
+The round-trip time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result on average between 0.5 and 0.6 ms.
+A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
+ARP handling). One test run has a greater RTT spike of 1.9 ms; it is the same
+run that has the 0.7 ms average. The other runs have no similar spike at all.
+To be able to draw conclusions, more runs should be made. The SLA is set to
+10 ms. The SLA value is used as a reference; it has not been defined by
+OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an average
+between approx. 170 and 200 MB/s. Within each test run the results vary, with
+a minimum of 2 MB/s and a maximum of 838 MB/s overall. Most runs have a
+minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in
+absolute numbers between the dates, between 617 and 838 MB/s.
+The SLA is set to 400 MB/s. The SLA value is used as a reference; it has not
+been defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 1.2 ns. The variations within each test run are similar, between
+1.215 and 1.219 ns. One exception is February 16, where the average is
+1.222 ns and varies between 1.22 and 1.28 ns.
+The SLA is set to 30 ns. The SLA value is used as a reference; it has not
+been defined by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. On the first date the reported packet delay variation varies between
+0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
+On the second date the delay variation varies between 0.002 and 0.006 ms,
+with an average delay variation of 0.004 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth vary
+between 17.4 and 17.9 GB/s. Within each test run the results vary more, with
+a minimum BW of 16.4 GB/s and a maximum of 18.2 GB/s overall.
+The SLA is set to 15 GB/s. The SLA value is used as a reference; it has not
+been defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 3080 and 3240,
+with one result per date. The overall average score is 3150.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases; around 20 percent decrease in the worst case.
For the other test runs there is, however, no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows than in other test runs where the PPS
+result is smaller with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000
+to 8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one
+mentioned earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no
+observed correlation between the amount of flows and the amount of lost
+packets. The amount of lost packets normally ranges between 100 and 1000 per
+test run, but there are spikes in the range of 10000 lost packets as well,
+and even more in rare cases.
+
+CPU utilization statistics are collected during the UDP flows sent between
+the VMs using pktgen as the packet generator tool. The average measurements
+for the CPU utilization ratio vary between 1% and 2%. The peak of the CPU
+utilization ratio appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary
+between 15.5 and 25.4 GB/s. Within each test run the results vary more, with
+a minimum BW of 9.7 GB/s and a maximum of 29.5 GB/s overall.
+The SLA is set to 6 GB/s. The SLA value is used as a reference; it has not
+been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases; around 20 percent decrease in the worst case.
+For the other test runs there is, however, no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows than in other test runs where the PPS
+result is smaller with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000
+to 8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one
+mentioned earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no
+observed correlation between the amount of flows and the amount of lost
+packets. The amount of lost packets normally ranges between 100 and 1000 per
+test run, but there are spikes in the range of 10000 lost packets as well,
+and even more in rare cases.
+
+Memory utilization statistics are collected during the UDP flows sent between
+the VMs using pktgen as the packet generator tool.
The average measurements for memory utilization vary between 225 MB and
+246 MB. The peak of memory utilization appears around 340 MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases; around 20 percent decrease in the worst case.
+For the other test runs there is, however, no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows than in other test runs where the PPS
+result is smaller with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000
+to 8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as the one
+mentioned earlier for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no
+observed correlation between the amount of flows and the amount of lost
+packets. The amount of lost packets normally ranges between 100 and 1000 per
+test run, but there are spikes in the range of 10000 lost packets as well,
+and even more in rare cases.
+
+Cache utilization statistics are collected during the UDP flows sent between
+the VMs using pktgen as the packet generator tool. The average measurements
+for cache utilization vary between 205 MB and 212 MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases; around 20 percent decrease in the worst case.
+For the other test runs there is, however, no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows than in other test runs where the PPS
+result is smaller with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS.
The total amount of packets in each test run is approx. 7500000 to 8200000
+packets. One test run on Feb. 15 sticks out with a PPS average of 558000 and
+approx. 1100000 packets in total (the same run as the one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no
+observed correlation between the amount of flows and the amount of lost
+packets. The amount of lost packets normally ranges between 100 and 1000 per
+test run, but there are spikes in the range of 10000 lost packets as well,
+and even more in rare cases.
+
+Network utilization statistics are collected during the UDP flows sent
+between the VMs using pktgen as the packet generator tool. The total number
+of packets received per second averaged 200 kpps and the total number of
+packets transmitted per second averaged 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+
+- Fuel 9.0
+- OpenStack Mitaka
+- ONOS Goldeneye
+- OpenVirtualSwitch 2.5.90
+- OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
+difference in RTT results.
+Notably, RTT and throughput come out with better results than, for instance,
+in the *fuel-os-nosdn-nofeature-ha* scenario. The reason for this should
+probably be further analyzed and understood. It could also be of interest to
+perform further analyses to find patterns and reasons for lost traffic, and
+to see whether there are continuous variations where some test cases stand
+out with better or worse results than the general case.
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case.
+It is possible to choose which specific scenarios to look at, and then to
+zoom in on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each
+run on the Intel POD6_ between September 8 and 11 in 2016.
+
+TC002
+-----
+The round-trip time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result on average between 1.35 ms and
+1.57 ms. Only one test run reaches the greatest RTT spike of 2.58 ms.
+Meanwhile, the smallest network latency is 1.11 ms, obtained on Sep. 11th. In
+general, the average network latency of the four test runs is between 1.35 ms
+and 1.57 ms. The SLA is set to 10 ms. The SLA value is used as a reference;
+it has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is
+175.4 MB/s. The IO read bandwidth of three of the runs looks similar, with an
+average between 43.7 and 56.3 MB/s, except for one run on Sep. 8, whose
+maximum storage throughput is only 107.9 MB/s. One of the runs has a minimum
+BW of 478 KB/s and another has a maximum BW of 168.6 MB/s. The SLA of read
+bandwidth is set to 400 MB/s, which is used as a reference; it has not been
+defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read operations per second of the four test runs have an average value
+between 978 per second and 1.20 K/s, while the minimum result is only 36
+operations per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series
+of micro benchmarks intended to measure basic operating system and hardware
+system metrics. The memory read latency of the four runs is between 1.164 ns
+and 1.244 ns on average. The variations within each test run differ: some
+runs vary over a large range while others change only slightly. For example,
+the largest variation is on September 10, where the memory read latency
+ranges from 1.128 ns to 1.381 ns. However, the results on September 11 change
+very little. The SLA is set to 30 ns. The SLA value is used as a reference;
+it has not been defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. In general, the packet delays of two runs look
+similar, as they both stay stable within each run, and their mean packet
+delays are 0.0772 ms and 0.0788 ms respectively. Of the four runs, the fourth
+has the worst result, with the packet delay reaching 0.0838 ms. The remaining
+run has a wide range, from 0.0666 ms to 0.0798 ms. The SLA value is set to
+10 ms. The SLA value is used as a reference; it has not been defined by
+OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, for
+which we use bw_mem to obtain the results. Among the four test runs, the
+trends of the memory bandwidth look similar: they all span a wide range, and
+the minimum and maximum results are 9.02 GB/s and 18.14 GB/s. The SLA is set
+to 15 GB/s. The SLA value is used as a reference; it has not been defined by
+OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU and parallel runs. It can be seen from the dashboard
+that the processing test results vary from scores 3395 to 3475, with only one
+result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The mean packet throughput of the four test runs is between 362.1 kpps and
+363.5 kpps, with the third run giving the highest result. The RTT results of
+all the test runs stay flat at approx. 17 ms. The PPS results are clearly not
+as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average, and the PPS
+results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
+and the minimum throughput is 326.5 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four
+runs have a similar average value, approx. 17 ms, and the worst RTT is 39 ms
+on Sep. 11th.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum value and the peak of CPU load at 0 percent and
+nine percent respectively. The peak is reached on Sep. 10, with a CPU load of
+nine percent.
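+
+CPU load figures of this kind can be collected with mpstat from the sysstat
+package (a sketch; the sampling interval and count are assumptions)::
+
+    # per-processor utilization, sampled every second for 60 seconds
+    mpstat -P ALL 1 60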
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+first tends to become larger and then smaller within every test run, ranging
+from 25.9 GB/s up to 26.6 GB/s and then down to 18.1 GB/s on average. Since
+the test id is one, only the INT memory write bandwidth is tested. On the
+whole, when the block size is from 2 kb to 16 kb, the memory write bandwidth
+looks similar, with a minimal BW of 22.1 GB/s and a peak value of 28.6 GB/s.
+As the block size becomes larger, the memory write bandwidth tends to
+decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
+has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The network latency is measured by ping, and the results of the four test
+runs look similar to each other; within these test runs, the maximum RTT can
+reach 39 ms and the average RTT is usually approx. 17 ms. The run on Sep. 11
+shows a peak latency of 39 ms, but on the whole the average RTTs of the four
+runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 270 MiB,
+in the first two runs. In general, the mean used memory of two of the test
+runs is quite high, reaching 264 MiB on average, while the other two runs
+have a wide range of memory usage, with a minimum value of 150 MiB and a
+maximum value of 270 MiB. The mean free memory of the four test runs follows
+a trend similar to that of the mean used memory, generally changing from
+220 MiB to 342 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, a tool for
+generating traffic loads for network experiments. The mean packet throughput
+of the four test runs ranges from 326.5 kpps to 418.1 kpps. The average
+number of flows in these tests is 240000, and each run has a minimum number
+of flows of 2 and a maximum of 1.001 million. At the same time, the
+corresponding packet throughput varies between 326.5 kpps and 418.1 kpps,
+with an average packet throughput between 361.7 kpps and 363.5 kpps. In
+summary, the PPS results seem consistent. Within each of the four test runs,
+the packet throughput does not grow as the number of flows becomes larger.
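+
+The memory utilization figures above are the kind of output free provides;
+for reference (a sketch; the reporting interval is an assumption)::
+
+    # report used and free memory in mebibytes every 10 seconds
+    free -m -s 10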
+
+TC071
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The network latency is measured by ping, and the results of the four test
+runs look similar to each other. Within each test run, the maximum RTT is
+only 47 ms and the average RTT is usually approx. 15 ms. On the whole, the
+average RTTs of the four runs stay stable and the network latency is
+relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffers in the system. Cache utilization statistics are collected
+during the UDP flows sent between the VMs using pktgen as the packet
+generator tool. The largest cache size is 214 MiB, which is the same for the
+four runs, and the smallest cache size is 94 MiB. On the whole, the average
+cache sizes of the four runs look the same, between 198 MiB and 207 MiB.
+Meanwhile, the trend of the buffer size stays flat, with a minimum value of
+7 MiB and a maximum value of 8 MiB, and an average value of about 7.9 MiB.
+
+Packet throughput can be measured by pktgen, a tool for generating traffic
+loads for network experiments. The mean packet throughput of the four test
+runs is roughly the same, approx. 363 kpps. The average number of flows in
+these tests is 240 k, and each run has a minimum number of flows of 2 and a
+maximum of 1.001 million. At the same time, the corresponding packet
+throughput varies between 327 kpps and 418 kpps, with an average packet
+throughput of about 363 kpps. Within each of the four test runs, the packet
+throughput does not grow as the number of flows becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round-trip times (RTT) between 2
+VMs on different blades are measured when increasing the amount of UDP flows
+sent between the VMs using pktgen as the packet generator tool.
+
+Round-trip times and packet throughput between VMs can typically be affected
+by the amount of flows set up and result in higher RTT and less PPS
+throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 47 ms, with an average latency of less than 16 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs differs from 361.7 kpps to 365.0 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during the UDP flows sent
+between the VMs using pktgen as the packet generator tool. The total number
+of packets transmitted per second looks similar for two of the test runs,
+with values varying widely from 10 pps to 432 kpps, while the results of the
+other test runs seem the same, staying stable at an average of 10 packets
+transmitted per second. However, the total number of packets received per
+second of the four runs looks similar, spanning a wide range from 2 pps to
+657 kpps.
+
+In some test runs, when running with less than approx. 250000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases. For the other test runs there is, however, no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 250000 flows than
+in other test runs where the PPS result is smaller with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no
+observed correlation between the amount of flows and the amount of lost
+packets. The amount of lost packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+
+- Joid
+- OpenStack Mitaka
+- ONOS Goldeneye
+- OpenVirtualSwitch 2.5.90
+- OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
+
diff --git a/docs/release/results/overview.rst b/docs/release/results/overview.rst
new file mode 100644
index 000000000..b4a050545
--- /dev/null
+++ b/docs/release/results/overview.rst
@@ -0,0 +1,106 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+Yardstick test result document overview
+=======================================
+
+.. _`Yardstick user guide`: artifacts.opnfv.org/yardstick/docs/userguide/index.html
+.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
+.. _Scenarios: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-scenarios
+
+This document provides an overview of the results of test cases developed by
+the OPNFV Yardstick Project, executed on OPNFV community labs.
+
+The Yardstick project is described in the `Yardstick user guide`_.
+
+Yardstick is run systematically at the end of a fresh OPNFV installation. The
+system under test (SUT) is installed by one of the installers Apex, Compass,
+Fuel or Joid on a Performance Optimized Datacenter (POD); a single installer
+per POD. All the runnable test cases are run sequentially. The installer and
+the POD are taken into account when evaluating whether a test case can be run
+or not. That is why the number of test cases may vary from one installer to
+another and from one POD to another.
+
+OPNFV CI provides automated build, deploy and testing for the software
+developed in OPNFV. Unless stated otherwise, the reported tests are automated
+via Jenkins jobs. Yardstick test results from OPNFV Continuous Integration
+can be found in the following dashboard:
+
+* *Yardstick* Dashboard_: uses InfluxDB to store Yardstick CI test results and
+  Grafana for visualization (user: opnfv / password: opnfv)
+
+The results of executed test cases are available in Dashboard_ and all logs
+are stored in Jenkins_.
+
+It was not possible to execute the entire Yardstick test case suite on the
+PODs assigned for release verification over a longer period of time, due to
+continuous work on the software components and to blocking faults in the
+environment, features or the test framework.
+
+The list of scenarios supported by each installer can be described as
+follows:
+
++-------------------------+---------+---------+---------+---------+
+| Scenario                | Apex    | Compass | Fuel    | Joid    |
++=========================+=========+=========+=========+=========+
+| os-nosdn-nofeature-noha |         |         |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-nofeature-ha   |    X    |    X    |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-ha  |    X    |    X    |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-noha|         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-ha  |    X    |    X    |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-noha|         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-onos-sfc-ha          |    X    |    X    |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-onos-sfc-noha        |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-ha    |    X    |    X    |    X    |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-noha  |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-ha        |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-noha      |    X    |    X    |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-ha     |    X    |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-noha   |         |    X    |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-ha         |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-noha       |         |    X    |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-ha         |         |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-noha       |    X    |         |    X    |         |
++-------------------------+---------+---------+---------+---------+
+| os-ocl-nofeature-ha     |         |         |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-ha         |         |         |         |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-noha       |         |         |         |    X    |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-fdio-noha     |    X    |         |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-moon-ha       |         |    X    |         |         |
++-------------------------+---------+---------+---------+---------+
+
+To qualify for release, the scenarios must have been deployed and
+successfully tested in four consecutive installations to establish stability
+of deployment and feature capability. It is recommended to run Yardstick test
+cases over a longer period of time in order to better understand the behavior
+of the system under test.
+
+References
+----------
+
+* IEEE Std 829-2008. "Standard for Software and System Test Documentation".
+
+* OPNFV Colorado release note for Yardstick.
diff --git a/docs/release/results/results.rst b/docs/release/results/results.rst
new file mode 100644
index 000000000..04c6b9f87
--- /dev/null
+++ b/docs/release/results/results.rst
@@ -0,0 +1,57 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Results listed by scenario
+==========================
+
+The following sections describe the Yardstick results as evaluated for the
+Colorado release scenario validation runs. Each section describes the
+determined state of the specific scenario as deployed in the Colorado
+release process.
+
+Scenario Results
+================
+
+.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
+
+The following documents contain results of Yardstick test cases executed on
+OPNFV labs, triggered by the OPNFV CI pipeline, documented per scenario.
+
+
+.. toctree::
+   :maxdepth: 1
+
+   os-nosdn-nofeature-ha.rst
+   os-nosdn-nofeature-noha.rst
+   os-odl_l2-nofeature-ha.rst
+   os-odl_l2-bgpvpn-ha.rst
+   os-odl_l2-sfc-ha.rst
+   os-nosdn-kvm-ha.rst
+   os-onos-nofeature-ha.rst
+   os-onos-sfc-ha.rst
+
+Test results of executed tests are available in Dashboard_ and logs in
+Jenkins_.
+
+
+Feature Test Results
+====================
+
+The following features were verified by Yardstick test cases:
+
+   * IPv6
+
+   * HA (see :doc:`yardstick-opnfv-ha`)
+
+   * KVM
+
+   * Parser
+
+   * Virtual Traffic Classifier (see :doc:`yardstick-opnfv-vtc`)
+
+   * StorPerf
+
+.. note:: The test cases for the IPv6 and Parser projects are included in the
+   Compass scenario.
+
diff --git a/docs/release/results/yardstick-opnfv-ha.rst b/docs/release/results/yardstick-opnfv-ha.rst
new file mode 100644
index 000000000..ef1617342
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-ha.rst
@@ -0,0 +1,118 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+===================================
+Test Results for yardstick-opnfv-ha
+===================================
+
+.. toctree::
+   :maxdepth: 2
+
+Details
+=======
+There are two test cases, TC019 and TC025, for high availability (HA)
+testing of the OPNFV platform. Both test cases were executed in CMCC's lab
+on a 3+2 HA deployment, installed with the Arno SR1 release of Fuel.
+
+
+TC019
+-----
+This test case verifies the high availability of an OpenStack service, i.e.
+"nova-api", on a controller node.
+There is one attacker, "kill-process", which kills all "nova-api" processes,
+and there are two monitors: "openstack-cmd", which monitors the "nova-api"
+service with the OpenStack command "nova image-list", and "process", which
+checks whether the "nova-api" process is running. Please see the test case
+description document for details.
+
+Overview of test results
+------------------------
+The service_outage_time of "nova image-list" is 0 seconds, while the
+process_recover_time of "nova-api" is 300 seconds, which equals the running
+time of this test case; that means the "nova-api" service cannot
+automatically recover itself.
+
+Detailed test results
+---------------------
+All "nova-api" processes on the selected controller node were killed, and
+the results of the two monitors were collected. Specifically, the results of
+the "nova image-list" requests were collected from a compute node and the
+status of the "nova-api" process was collected from the selected controller
+node.
+
+Each monitor was running in a single process. The running time of each
+monitor was about 300 seconds, with no waiting time between two successive
+monitor runs. The "nova image-list" monitor ran 127 times, i.e. one
+OpenStack command request about every 2.36 seconds, while the "nova-api"
+process check ran 141 times, i.e. one check about every 2.13 seconds.
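+
+Conceptually, the "openstack-cmd" monitor repeats the configured command and
+records failed requests, roughly like the following loop (a simplified
+sketch, not the actual Yardstick monitor implementation)::
+
+    # issue the monitored command about every 2 seconds for ~300 seconds
+    for i in $(seq 1 150); do
+        nova image-list > /dev/null 2>&1 || echo "request failed at $(date)"
+        sleep 2
+    done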
+
+The outage time of each monitor, named "service_outage_time" for the
+"openstack-cmd" monitor and "process_recover_time" for the "process"
+monitor, is defined as the duration from the begin time of the first failed
+request to the end time of the last failed request.
+
+All "nova image-list" requests were successful, so the service_outage_time
+of "nova image-list" is 0 seconds, while the "nova-api" processes were not
+running for any "process" check, so the process_recover_time of "nova-api"
+is 300 seconds.
+
+Rationale for decisions
+-----------------------
+The service_outage_time is 0 seconds, which means the failover time of the
+OpenStack service is less than 2.36 seconds, the period of each request.
+However, the process_recover_time equals the test case running time, which
+means the process is not automatically recovered, so this test case fails.
+
+
+TC025
+-----
+This test case verifies the high availability of a controller node. When one
+of the controller nodes is abnormally shut down, the services provided
+should still be OK.
+There is one attacker, "kill-process", which kills all "nova-api" processes,
+and there are two "openstack-cmd" monitors, one monitoring the OpenStack
+command "nova image-list" and the other monitoring "neutron router-list".
+Please see the test case description document for details.
+
+Overview of test results
+------------------------
+The service_outage_time of both "nova image-list" and "neutron router-list"
+was 0 seconds.
+
+Detailed test results
+---------------------
+A selected controller node was shut down, and the results of the two
+monitors were collected from a compute node.
+
+The results returned by the "nova image-list" and "neutron router-list"
+requests from the compute node were collected, and the durations of failed
+requests were used to calculate the service_outage_time of the corresponding
+service.
+
+Each monitor was running in a single process. The running time of each
+monitor was about 300 seconds, with no waiting time between two successive
+monitor runs. The "nova image-list" monitor ran 49 times, i.e. one OpenStack
+command request about every 6.12 seconds, while the "neutron router-list"
+monitor ran 28 times, i.e. one request about every 10.71 seconds.
+
+The "service_outage_time" for the two monitors is defined as the duration
+from the begin time of the first failed request to the end time of the last
+failed request.
+
+All "nova image-list" and "neutron router-list" requests were successful, so
+the service_outage_time of both monitors was 0 seconds.
+
+Rationale for decisions
+-----------------------
+As the service_outage_time of all monitors is 0 seconds, meaning there were
+no failed requests during the test case run time, this test case passes.
+
+
+Conclusions and recommendations
+-------------------------------
+TC019 shows that the killed process is not automatically recovered, which
+should be improved.
+
+There are several improvement points for the HA tests:
+a) Running the test cases in different environments deployed by different
+installers, such as compass4nfv, apex and joid, with different versions.
+b) The period of each request is rather long; a more accurate test method is
+needed.
+c) More test cases with different faults and different monitors are needed.
diff --git a/docs/release/results/yardstick-opnfv-kvm.rst b/docs/release/results/yardstick-opnfv-kvm.rst
new file mode 100644
index 000000000..ee4c6390b
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-kvm.rst
@@ -0,0 +1,38 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+====================================
+Test Results for yardstick-opnfv-kvm
+====================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+Detailed test results
+---------------------
+
+.. info on lab, installer, scenario
+
+Rationale for decisions
+-----------------------
+.. result analysis, pass/fail
+
+Conclusions and recommendations
+-------------------------------
+
+.. did the expected behavior occur?
diff --git a/docs/release/results/yardstick-opnfv-parser.rst b/docs/release/results/yardstick-opnfv-parser.rst
new file mode 100644
index 000000000..520d867ef
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-parser.rst
@@ -0,0 +1,38 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=======================================
+Test Results for yardstick-opnfv-parser
+=======================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+Detailed test results
+---------------------
+
+.. info on lab, installer, scenario
+
+Rationale for decisions
+-----------------------
+.. result analysis, pass/fail
+
+Conclusions and recommendations
+-------------------------------
+
+.. did the expected behavior occur?
diff --git a/docs/release/results/yardstick-opnfv-vtc.rst b/docs/release/results/yardstick-opnfv-vtc.rst
new file mode 100644
index 000000000..059b5491f
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-vtc.rst
@@ -0,0 +1,248 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _Dashboard006: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc006
+.. _Dashboard007: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc007
+.. _Dashboard020: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc020
+.. _Dashboard021: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc021
+.. _DashboardVTC: http://testresults.opnfv.org/grafana/dashboard/db/vtc-dashboard
+
+====================================
+Test Results for yardstick-opnfv-vtc
+====================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+The virtual Traffic Classifier (vTC) scenario supported by Yardstick is used
+by 4 test cases:
+
+- TC006
+- TC007
+- TC020
+- TC021
+
+
+* TC006
+
+TC006 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking
+test. It measures the end-to-end throughput supported by the virtual Traffic
+Classifier (vTC).
+Results of the test are shown in Dashboard006_.
+The throughput is expressed as a percentage of the available bandwidth on
+the NIC.
+
+
+* TC007
+
+TC007 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking
+in presence of noisy neighbors test. It measures the end-to-end throughput
+supported by the virtual Traffic Classifier when a user-defined number of
+noisy neighbors is deployed.
+Results of the test are shown in Dashboard007_.
+The throughput is expressed as a percentage of the available bandwidth on
+the NIC.
+
+
+* TC020
+
+TC020 is the Virtual Traffic Classifier Instantiation test.
+It verifies that a newly instantiated vTC is alive and functional and that
+its instantiation is correctly supported by the underlying infrastructure.
+Results of the test are shown in Dashboard020_.
+
+
+* TC021
+
+TC021 is the Virtual Traffic Classifier Instantiation in presence of noisy
+neighbors test.
+It verifies that a newly instantiated vTC is alive and functional and that
+its instantiation is correctly supported by the underlying infrastructure
+when noisy neighbors are present.
+Results of the test are shown in Dashboard021_.
+
+* Generic
+
+In the Generic scenario the Virtual Traffic Classifier is running on a
+standard OpenStack setup and traffic is replayed from a neighbor VM. The
+traffic sent contains various protocols and applications, and the vTC
+identifies them and exports the data.
+Results of the test are shown in the DashboardVTC_.
+
+Detailed test results
+---------------------
+
+* TC006
+
+The results for TC006 have been obtained using the following test case
+configuration:
+
+- Context: Dummy
+- Scenario: vtc_throughput
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+
+
+* TC007
+
+The results for TC007 have been obtained using the following test case
+configuration:
+
+- Context: Dummy
+- Scenario: vtc_throughput_noisy
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+- Number of noisy neighbors: 2
+- Number of cores per neighbor: 2
+- Amount of RAM per neighbor: 1G
+
+
+* TC020
+
+The results for TC020 have been obtained using the following test case
+configuration:
+
+- Context: Dummy
+- Scenario: vtc_instantiation_validation
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+
+
+* TC021
+
+The results for TC021 have been obtained using the following test case
+configuration:
+
+- Context: Dummy
+- Scenario: vtc_instantiation_validation
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+- Number of noisy neighbors: 2
+- Number of cores per neighbor: 2
+- Amount of RAM per neighbor: 1G
+
+
+For all the test cases, the user can specify different values for the
+parameters.
+
+* Generic
+
+The results listed in the previous section have been obtained using a
+standard OpenStack setup.
+The user can replay his/her own traffic and see the corresponding results.
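+
+For reference, a test case with options like the above is executed through
+the Yardstick CLI once a corresponding task file is in place (a sketch; the
+sample file name is an assumption)::
+
+    yardstick task start samples/vtc_throughput.yaml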
+
+Rationale for decisions
+-----------------------
+
+* TC006
+
+The result of the test is a number between 0 and 100 which represents the
+percentage of the bandwidth available on the NIC that corresponds to the
+throughput supported by the vTC.
+
+
+* TC007
+
+The result of the test is a number between 0 and 100 which represents the
+percentage of the bandwidth available on the NIC that corresponds to the
+throughput supported by the vTC.
+
+* TC020
+
+The execution of the test is done as follows:
+
+- The vTC is deployed on the OpenStack testbed;
+- Some traffic is sent to the vTC;
+- The vTC changes the header of the packets and sends them back to the
+  packet generator;
+- The packet generator checks that all the packets are received correctly
+  and have been changed correctly by the vTC.
+
+The test is declared as PASSED if all the packets are correctly received by
+the packet generator and they have been modified by the virtual Traffic
+Classifier as required.
+
+
+* TC021
+
+The execution of the test is done as follows:
+
+- The vTC is deployed on the OpenStack testbed;
+- The noisy neighbors are deployed as requested by the user;
+- Some traffic is sent to the vTC;
+- The vTC changes the header of the packets and sends them back to the
+  packet generator;
+- The packet generator checks that all the packets are received correctly
+  and have been changed correctly by the vTC.
+
+The test is declared as PASSED if all the packets are correctly received by
+the packet generator and they have been modified by the virtual Traffic
+Classifier as required.
+
+* Generic
+
+The execution of the test consists of the following actions:
+
+- The vTC is deployed on the OpenStack testbed;
+- The traffic generator VM is deployed on the OpenStack testbed;
+- Traffic data are relevant to the network setup;
+- Traffic is sent to the vTC.
+
+
+
+Conclusions and recommendations
+-------------------------------
+
+* TC006
+
+The obtained results show that the virtual Traffic Classifier can support up
+to 4 Gbps (40% of the available bandwidth), which corresponds to the
+expected behaviour of the virtual Traffic Classifier.
+Using the configuration with SR-IOV and the large flavor, the expected
+throughput should generally be in the range between 3 and 4 Gbps.
+
+
+* TC007
+
+These results correspond to the configuration in which the virtual Traffic
+Classifier uses SR-IOV Virtual Functions and the flavor is set to large for
+the virtual machine.
+The throughput is in the range between 2.5 Gbps and 3.7 Gbps.
+This shows that the presence of 2 noisy neighbors reduces the throughput of
+the service by between 10 and 20%.
+Increasing the number of neighbors would have a higher impact on the
+performance.
+
+
+* TC020
+
+The obtained results correspond to the expected behaviour of the virtual
+Traffic Classifier. Using the configuration with SR-IOV and the large
+flavor, the expected result is that the vTC is correctly instantiated, is
+able to receive and send packets using the SR-IOV technology, and forwards
+packets back to the packet generator changing the TCP/IP header as required.
+
+
+* TC021
+
+The obtained results correspond to the expected behaviour of the virtual
+Traffic Classifier. Using the configuration with SR-IOV and the large
+flavor, the expected result is that the vTC is correctly instantiated, is
+able to receive and send packets using the SR-IOV technology, and forwards
+packets back to the packet generator changing the TCP/IP header as required,
+also in the presence of noisy neighbors.
+
+* Generic
+
+The obtained results correspond to the expected behaviour of the virtual
+Traffic Classifier. Using the aforementioned configuration, the expected
+application protocols are identified and their traffic statistics are shown
+in the DashboardVTC_; a group of popular applications is selected to
+demonstrate the sound operation of the vTC.
+The demonstrated application protocols are:
+
+- HTTP
+- Skype
+- Bittorrent
+- Youtube
+- Dropbox
+- Twitter
+- Viber
+- iCloud
diff --git a/docs/results/index.rst b/docs/results/index.rst
deleted file mode 100644
index 2b67f1b22..000000000
--- a/docs/results/index.rst
+++ /dev/null
@@ -1,14 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-======================
-Yardstick test results
-======================
-
-.. toctree::
-   :maxdepth: 4
-
-.. include:: ./overview.rst
-.. include:: ./results.rst
diff --git a/docs/results/os-nosdn-kvm-ha.rst b/docs/results/os-nosdn-kvm-ha.rst
deleted file mode 100644
index a8a56f80e..000000000
--- a/docs/results/os-nosdn-kvm-ha.rst
+++ /dev/null
@@ -1,270 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-================================
-Test Results for os-nosdn-kvm-ha
-================================
-
-.. toctree::
-   :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to chose which specific scenarios to look at, and then to zoom in
-on the details of each run test scenario as well.
-
-All of the test case results below are based on 4 scenario test
-runs, each run on the Ericsson POD2_ or LF POD2_ between August 24 and 30 in
-2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.44 and 0.75 ms.
-A few runs start with a 0.65 - 0.68 ms RTT spike (This could be because of
-normal ARP handling). One test run has a greater RTT spike of 1.49 ms.
-To be able to draw conclusions more runs should be made. SLA set to 10 ms.
-The SLA value is used as a reference, it has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 92 and 204 MB/s. Within each test run the results
-vary, with a minimum 2 MB/s and maximum 819 MB/s on the totality. Most runs
-have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 238 and 819 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 2.07 ns. The variations within each test run are similar, between
-1.41 and 3.53 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. The reported packet delay variation varies between 0.0051 and 0.0243 ms,
-with an average delay variation between 0.0081 ms and 0.0195 ms.
- -TC012 ------ -Between test dates, the average measurements for memory bandwidth result in -approx. 13.6 GB/s. Within each test run the results vary more, with a minimal -BW of 6.09 GB/s and maximum of 16.47 GB/s on the totality. -SLA set to 15 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC014 ------ -The Unixbench processor test run results vary between scores 2316 and 3619, -one result each date. -No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -CPU utilization statistics are collected during UDP flows sent between the VMs -using pktgen as packet generator tool. The average measurements for CPU -utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio -appears around 7%. - -TC069 ------ -Between test dates, the average measurements for memory bandwidth vary between -22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal -BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality. -SLA set to 6 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - - -TC070 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 
15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Memory utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for memory -utilization vary between 225MB to 246MB. The peak of memory utilization appears -around 340MB. - -TC071 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Cache utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. 
The average measurements for cache -utilization vary between 205MB to 212MB. - -TC072 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. Total number of packets received per -second was average on 200 kpps and total number of packets transmitted per -second was average on 600 kpps. - -Detailed test results ---------------------- -The scenario was run on Ericsson POD2_ and LF POD2_ with: -Fuel 9.0 -OpenStack Mitaka -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. - -Conclusions and recommendations -------------------------------- -The pktgen test configuration has a relatively large base effect on RTT in -TC037 compared to TC002, where there is no background load at all. Approx. -15 ms compared to approx. 0.5 ms, which is more than a 3000 percentage -difference in RTT results. -Especially RTT and throughput come out with better results than for instance -the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should -probably be further analyzed and understood. Also of interest could be -to make further analyzes to find patterns and reasons for lost traffic. -Also of interest could be to see if there are continuous variations where -some test cases stand out with better or worse results than the general test -case. 
- diff --git a/docs/results/os-nosdn-nofeature-ha.rst b/docs/results/os-nosdn-nofeature-ha.rst deleted file mode 100644 index 9e52731d5..000000000 --- a/docs/results/os-nosdn-nofeature-ha.rst +++ /dev/null @@ -1,492 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -====================================== -Test Results for os-nosdn-nofeature-ha -====================================== - -.. toctree:: - :maxdepth: 2 - - -apex -==== - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs - - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test -runs, each run on the LF POD1_ between August 25 and 28 in -2016. - -TC002 ------ -The round-trip-time (RTT) between 2 VMs on different blades is measured using -ping. Most test run measurements result on average between 0.74 and 1.08 ms. -A few runs start with a 0.99 - 1.07 ms RTT spike (This could be because of -normal ARP handling). One test run has a greater RTT spike of 1.35 ms. -To be able to draw conclusions more runs should be made. SLA set to 10 ms. -The SLA value is used as a reference, it has not been defined by OPNFV. - -TC005 ------ -The IO read bandwidth looks similar between different dates, with an -average between approx. 128 and 136 MB/s. Within each test run the results -vary, with a minimum 5 MB/s and maximum 446 MB/s on the totality. Most runs -have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW varies more in -absolute numbers between the dates, between 416 and 446 MB/s. -SLA set to 400 MB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC010 ------ -The measurements for memory latency are similar between test dates and result -in approx. 1.09 ns. The variations within each test run are similar, between -1.0860 and 1.0880 ns. -SLA set to 30 ns. The SLA value is used as a reference, it has not been defined -by OPNFV. - -TC011 ------ -Packet delay variation between 2 VMs on different blades is measured using -Iperf3. The reported packet delay variation varies between 0.0025 and 0.0148 ms, -with an average delay variation between 0.0056 ms and 0.0157 ms. - -TC012 ------ -Between test dates, the average measurements for memory bandwidth result in -approx. 19.70 GB/s. Within each test run the results vary more, with a minimal -BW of 18.16 GB/s and maximum of 20.13 GB/s on the totality. -SLA set to 15 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC014 ------ -The Unixbench processor test run results vary between scores 3224.4 and 3842.8, -one result each date. The average score on the total is 3659.5. -No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. 
Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -CPU utilization statistics are collected during UDP flows sent between the VMs -using pktgen as packet generator tool. The average measurements for CPU -utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio -appears around 7%. - -TC069 ------ -Between test dates, the average measurements for memory bandwidth vary between -22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal -BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality. -SLA set to 6 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC070 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). 
- -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Memory utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for memory -utilization vary between 225MB to 246MB. The peak of memory utilization appears -around 340MB. - -TC071 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Cache utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for cache -utilization vary between 205MB to 212MB. - -TC072 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 
10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. Total number of packets received per -second was average on 200 kpps and total number of packets transmitted per -second was average on 600 kpps. - -Detailed test results ---------------------- -The scenario was run on LF POD1_ with: -Apex -OpenStack Mitaka -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. - - -Joid -==== - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs - - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test runs, each run -on the Intel POD5_ between September 11 and 14 in 2016. - -TC002 ------ -The round-trip-time (RTT) between 2 VMs on different blades is measured using -ping. Most test run measurements result on average between 1.59 and 1.70 ms. -Two test runs have reached the same greater RTT spike of 3.06 ms, which are -1.66 and 1.70 ms average, but only one has the lower RTT of 1.35 ms. The other -two runs have no similar spike at all. To be able to draw conclusions more runs -should be made. SLA set to be 10 ms. The SLA value is used as a reference, it -has not been defined by OPNFV. - -TC005 ------ -The IO read bandwidth actually refers to the storage throughput and the -greatest IO read bandwidth of the four runs is 173.3 MB/s. The IO read -bandwidth of the four runs looks similar on different four days, with an -average between 32.7 and 60.4 MB/s. One of the runs has a minimum BW of 429 -KM/s and other has a maximum BW of 173.3 MB/s. The SLA of read bandwidth sets -to be 400 MB/s, which is used as a reference, and it has not been defined by -OPNFV. - -TC010 ------ -The tool we use to measure memory read latency is lmbench, which is a series of -micro benchmarks intended to measure basic operating system and hardware system -metrics. The memory read latency of the four runs is 1.1 ns on average. 
The -variations within each test run are different, some vary from a large range and -others have a small change. For example, the largest change is on September 14, -the memory read latency of which is ranging from 1.12 ns to 1.22 ns. However, -the results on September 12 change very little, which range from 1.14 ns to -1.17 ns. The SLA sets to be 30 ns. The SLA value is used as a reference, it has -not been defined by OPNFV. - -TC011 ------ -Iperf3 is a tool for evaluating the pocket delay variation between 2 VMs on -different blades. The reported pocket delay variations of the four test runs -differ from each other. The results on September 13 within the date look -similar and the values are between 0.0087 and 0.0190 ms, which is 0.0126 ms on -average. However, on the fourth day, the pocket delay variation has a large -wide change within the date, which ranges from 0.0032 ms to 0.0121 ms and has -the minimum average value. The pocket delay variations of other two test runs -look relatively similar, which are 0.0076 ms and 0.0152 ms on average. The SLA -value sets to be 10 ms. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC012 ------ -Lmbench is also used to measure the memory read and write bandwidth, in which -we use bw_mem to obtain the results. Among the four test runs, the memory -bandwidth within the second day almost keep stable, which is 11.58 GB/s on -average. And the memory bandwidth of the fourth day look similar as that of the -second day, both of which remain stable. The other two test runs relatively -change from a large wide range, in which the minimum memory bandwidth is 11.22 -GB/s and the maximum bandwidth is 16.65 GB/s with an average bandwidth of about -12.20 GB/s. Here SLA set to be 15 GB/s. The SLA value is used as a reference, -it has not been defined by OPNFV. - -TC014 ------ -The Unixbench is used to measure processing speed, that is instructions per -second. It can be seen from the dashboard that the processing test results -vary from scores 3272 to 3444, and there is only one result one date. The -overall average score is 3371. No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The mean packet throughput of the four test runs is 119.85, 128.02, 121.40 and -126.08 kpps, of which the result of the second is the highest. The RTT results -of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS -results are not as consistent as the RTT results. - -The No. flows of the four test runs are 240 k on average and the PPS results -look a little waved since the largest packet throughput is 184 kpps and the -minimum throughput is 49 K respectively. - -There are no errors of packets received in the four runs, but there are still -lost packets in all the test runs. The RTT values obtained by ping of the four -runs have the similar average vaue, that is 38 ms, of which the worest RTT is -93 ms on Sep. 14th. - -CPU load of the four test runs have a large change, since the minimum value and -the peak of CPU load is 0 percent and 51 percent respectively. And the best -result is obtained on Sep. 14th. 
- -TC069 ------ -With the block size changing from 1 kb to 512 kb, the memory write bandwidth -tends to become larger first and then smaller within every run test, which -rangs from 22.3 GB/s to 26.8 GB/s and then to 18.5 GB/s on average. Since the -test id is one, it is that only the INT memory write bandwidth is tested. On -the whole, when the block size is 8 kb and 16 kb, the memory write bandwidth -look similar with a minimal BW of 22.5 GB/s and peak value of 28.7 GB/s. SLA -sets to be 7 GB/s. The SLA value is used as a a reference, it has not been -defined by OPNFV. - -TC070 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of the four test runs -look similar with each other. Within each test run, the maximum RTT can reach -more than 80 ms and the average RTT is usually approx. 38 ms. On the whole, the -average RTTs of the four runs keep flat. - -Memory utilization is measured by free, which can display amount of free and -used memory in the system. The largest amount of used memory is 268 MiB on Sep -14, which also has the largest minimum memory. Besides, the rest three test -runs have the similar used memory. On the other hand, the free memory of the -four runs have the same smallest minimum value, that is about 223 MiB, and the -maximum free memory of three runs have the similar result, that is 337 MiB, -except that on Sep. 14th, whose maximum free memory is 254 MiB. On the whole, -all the test runs have similar average free memory. - -Network throughput and packet loss can be measured by pktgen, which is a tool -in the network for generating traffic loads for network experiments. The mean -network throughput of the four test runs seem quite different, ranging from -119.85 kpps to 128.02 kpps. The average number of flows in these tests is -24000, and each run has a minimum number of flows of 2 and a maximum number -of flows of 1.001 Mil. At the same time, the corresponding packet throughput -differ between 49.4k and 193.3k with an average packet throughput of approx. -125k. On the whole, the PPS results seem consistent. Within each test run of -the four runs, when number of flows becomes larger, the packet throughput seems -not larger in the meantime. - -TC071 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of the four test runs -look similar with each other. Within each test run, the maximum RTT can reach -more than 94 ms and the average RTT is usually approx. 35 ms. On the whole, the -average RTTs of the four runs keep flat. - -Cache utilization is measured by cachestat, which can display size of cache and -buffer in the system. 
Cache utilization statistics are collected during UDP -flows sent between the VMs using pktgen as packet generator tool.The largest -cache size is 212 MiB in the four runs, and the smallest cache size is 75 MiB. -On the whole, the average cache size of the four runs is approx. 208 MiB. -Meanwhile, the tread of the buffer size looks similar with each other. - -Packet throughput can be measured by pktgen, which is a tool in the network for -generating traffic loads for network experiments. The mean packet throughput of -the four test runs seem quite different, ranging from 119.85 kpps to 128.02 -kpps. The average number of flows in these tests is 239.7k, and each run has a -minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At the -same time, the corresponding packet throughput differ between 49.4k and 193.3k -with an average packet throughput of approx. 125k. On the whole, the PPS results -seem consistent. Within each test run of the four runs, when number of flows -becomes larger, the packet throughput seems not larger in the meantime. - -TC072 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 32 ms. The PPS results are not as consistent as the RTT results. - -Network utilization is measured by sar, that is system activity reporter, which -can display the average statistics for the time since the system was started. -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The largest total number of packets -transmitted per second differs from each other, in which the smallest number of -packets transmitted per second is 6 pps on Sep. 12ed and the largest of that is -210.8 kpps. Meanwhile, the largest total number of packets received per second -differs from each other, in which the smallest number of packets received per -second is 2 pps on Sep. 13rd and the largest of that is 250.2 kpps. - -In some test runs when running with less than approx. 90000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. For the other test runs there is however no -significant change to the PPS throughput when the number of flows are -increased. In some test runs the PPS is also greater with 1000000 flows -compared to other test runs where the PPS result is less with only 2 flows. - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally differs a lot per test run. - -Detailed test results ---------------------- -The scenario was run on Intel POD5_ with: -Joid -OpenStack Mitaka -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Conclusions and recommendations -------------------------------- -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. 
- - diff --git a/docs/results/os-nosdn-nofeature-noha.rst b/docs/results/os-nosdn-nofeature-noha.rst deleted file mode 100644 index 8b7c184bb..000000000 --- a/docs/results/os-nosdn-nofeature-noha.rst +++ /dev/null @@ -1,259 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -======================================== -Test Results for os-nosdn-nofeature-noha -======================================== - -.. toctree:: - :maxdepth: 2 - - -Joid -===== - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test runs, each run -on the Intel POD5_ between September 12 and 15 in 2016. - -TC002 ------ -The round-trip-time (RTT) between 2 VMs on different blades is measured using -ping. Most test run measurements result on average between 1.50 and 1.68 ms. -Only one test run has reached greatest RTT spike of 2.92 ms, which has -the smallest RTT of 1.06 ms. The other three runs have no similar spike at all, -the minimum and average RTTs of which are approx. 1.50 ms and 1.68 ms. SLA set to -be 10 ms. The SLA value is used as a reference, it has not been defined by -OPNFV. - -TC005 ------ -The IO read bandwidth actually refers to the storage throughput, which is -measured by fio and the greatest IO read bandwidth of the four runs is 177.5 -MB/s. The IO read bandwidth of the four runs looks similar on different four -days, with an average between 46.7 and 62.5 MB/s. One of the runs has a minimum -BW of 680 KM/s and other has a maximum BW of 177.5 MB/s. The SLA of read -bandwidth sets to be 400 MB/s, which is used as a reference, and it has not -been defined by OPNFV. - -The results of storage IOPS for the four runs look similar with each other. The -test runs all have an approx. 1.55 K/s for IO reading with an minimum value of -less than 60 times per second. - -TC010 ------ -The tool we use to measure memory read latency is lmbench, which is a series of -micro benchmarks intended to measure basic operating system and hardware system -metrics. The memory read latency of the four runs is between 1.134 ns and 1.227 -ns on average. The variations within each test run are quite different, some -vary from a large range and others have a small change. For example, the -largest change is on September 15, the memory read latency of which is ranging -from 1.116 ns to 1.393 ns. However, the results on September 12 change very -little, which mainly keep flat and range from 1.124 ns to 1.55 ns. The SLA sets -to be 30 ns. The SLA value is used as a reference, it has not been defined by -OPNFV. - -TC011 ------ -Iperf3 is a tool for evaluating the pocket delay variation between 2 VMs on -different blades. The reported pocket delay variations of the four test runs -differ from each other. The results on September 13 within the date look -similar and the values are between 0.0213 and 0.0225 ms, which is 0.0217 ms on -average. However, on the third day, the packet delay variation has a large -wide change within the date, which ranges from 0.008 ms to 0.0225 ms and has -the minimum value. On Sep. 
12, the packet delay is quite long, for the value is -between 0.0236 and 0.0287 ms and it also has the maximum packet delay of 0.0287 -ms. The packet delay of the last test run is 0.0151 ms on average. The SLA -value sets to be 10 ms. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC012 ------ -Lmbench is also used to measure the memory read and write bandwidth, in which -we use bw_mem to obtain the results. Among the four test runs, the memory -bandwidth of three test runs almost keep stable within each run, which is -11.65, 11.57 and 11.64 GB/s on average. However, the memory read and write -bandwidth on Sep. 14 has a large range, for it ranges from 11.36 GB/s to 16.68 -GB/s. Here SLA set to be 15 GB/s. The SLA value is used as a reference, it has -not been defined by OPNFV. - -TC014 ------ -The Unixbench is used to evaluate the IaaS processing speed with regards to -score of single cpu running and parallel running. It can be seen from the -dashboard that the processing test results vary from scores 3222 to 3585, and -there is only one result one date. No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The mean packet throughput of the four test runs is 124.8, 160.1, 113.8 and -137.3 kpps, of which the result of the second is the highest. The RTT results -of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS -results are not as consistent as the RTT results. - -The No. flows of the four test runs are 240 k on average and the PPS results -look a little waved since the largest packet throughput is 243.1 kpps and the -minimum throughput is 37.6 kpps respectively. - -There are no errors of packets received in the four runs, but there are still -lost packets in all the test runs. The RTT values obtained by ping of the four -runs have the similar average vaue, that is between 32 ms and 41 ms, of which -the worest RTT is 155 ms on Sep. 14th. - -CPU load is measured by mpstat, and CPU load of the four test runs seem a -little similar, since the minimum value and the peak of CPU load is between 0 -percent and 9 percent respectively. And the best result is obtained on Sep. -15th, with an CPU load of nine percent. - -TC069 ------ -With the block size changing from 1 kb to 512 kb, the memory write bandwidth -tends to become larger first and then smaller within every run test, which -rangs from 22.4 GB/s to 26.5 GB/s and then to 18.6 GB/s on average. Since the -test id is one, it is that only the INT memory write bandwidth is tested. On -the whole, when the block size is 8 kb and 16 kb, the memory write bandwidth -look similar with a minimal BW of 22.5 GB/s and peak value of 28.7 GB/s. And -then with the block size becoming larger, the memory write bandwidth tends to -decrease. SLA sets to be 7 GB/s. The SLA value is used as a a reference, it has -not been defined by OPNFV. - -TC070 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. 
- -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of three test runs look -similar with each other, and Within these test runs, the maximum RTT can reach -95 ms and the average RTT is usually approx. 36 ms. The network latency tested -on Sep. 14 shows that it has a peak latency of 155 ms. But on the whole, the -average RTTs of the four runs keep flat. - -Memory utilization is measured by free, which can display amount of free and -used memory in the system. The largest amount of used memory is 270 MiB on Sep -13, which also has the smallest minimum memory utilization. Besides, the rest -three test runs have the similar used memory with an average memory usage of -264 MiB. On the other hand, the free memory of the four runs have the same -smallest minimum value, that is about 223 MiB, and the maximum free memory of -three runs have the similar result, that is 226 MiB, except that on Sep. 13th, -whose maximum free memory is 273 MiB. On the whole, all the test runs have -similar average free memory. - -Network throughput and packet loss can be measured by pktgen, which is a tool -in the network for generating traffic loads for network experiments. The mean -network throughput of the four test runs seem quite different, ranging from -119.85 kpps to 128.02 kpps. The average number of flows in these tests is -240000, and each run has a minimum number of flows of 2 and a maximum number -of flows of 1.001 Mil. At the same time, the corresponding packet throughput -differ between 38k and 243k with an average packet throughput of approx. 134k. -On the whole, the PPS results seem consistent. Within each test run of the four -runs, when number of flows becomes larger, the packet throughput seems not -larger in the meantime. - -TC071 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of the four test runs -look similar with each other. Within each test run, the maximum RTT can reach -79 ms and the average RTT is usually approx. 35 ms. On the whole, the average -RTTs of the four runs keep flat. - -Cache utilization is measured by cachestat, which can display size of cache and -buffer in the system. Cache utilization statistics are collected during UDP -flows sent between the VMs using pktgen as packet generator tool.The largest -cache size is 214 MiB in the four runs, and the smallest cache size is 100 MiB. -On the whole, the average cache size of the four runs is approx. 210 MiB. -Meanwhile, the tread of the buffer size looks similar with each other. On the -other hand, the mean buffer size of the four runs keep flat, since they have a -minimum value of approx. 7 MiB and a maximum value of 8 MiB, with an average -value of about 8 MiB. - -Packet throughput can be measured by pktgen, which is a tool in the network for -generating traffic loads for network experiments. The mean packet throughput of -the four test runs seem quite different, ranging from 113.8 kpps to 124.8 kpps. 
-The average number of flows in these tests is 240k, and each run has a minimum -number of flows of 2 and a maximum number of flows of 1.001 Mil. At the same -time, the corresponding packet throughput differ between 47.6k and 243.1k with -an average packet throughput between 113.8k and 160.1k. Within each test run of -the four runs, when number of flows becomes larger, the packet throughput seems -not larger in the meantime. - -TC072 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs -between 0 ms and 79 ms with an average leatency of approx. 35 ms. The PPS -results are not as consistent as the RTT results, for the mean packet -throughput of the four runs differ from 113.8 kpps to 124.8 kpps. - -Network utilization is measured by sar, that is system activity reporter, which -can display the average statistics for the time since the system was started. -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The largest total number of packets -transmitted per second look similar on the first three runs with a minimum -number of 10 pps and a maximum number of 97 kpps, except the one on Sep. 15th, -in which the number of packets transmitted per second is 10 pps. Meanwhile, the -largest total number of packets received per second differs from each other, -in which the smallest number of packets received per second is 1 pps and the -largest of that is 276 kpps. - -In some test runs when running with less than approx. 90000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. For the other test runs there is however no -significant change to the PPS throughput when the number of flows are -increased. In some test runs the PPS is also greater with 1000000 flows -compared to other test runs where the PPS result is less with only 2 flows. - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally differs a lot per test run. - -Detailed test results ---------------------- -The scenario was run on Intel POD5_ with: -Joid -OpenStack Mitaka -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Conclusions and recommendations -------------------------------- -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. diff --git a/docs/results/os-odl_l2-bgpvpn-ha.rst b/docs/results/os-odl_l2-bgpvpn-ha.rst deleted file mode 100644 index 2bd6dc35d..000000000 --- a/docs/results/os-odl_l2-bgpvpn-ha.rst +++ /dev/null @@ -1,53 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -==================================== -Test Results for os-odl_l2-bgpvpn-ha -==================================== - -.. toctree:: - :maxdepth: 2 - - -fuel -==== - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. 
_POD2: https://wiki.opnfv.org/pharos?&#community_test_labs - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test runs, each run -on the Ericsson POD2_ between September 7 and 11 in 2016. - -TC043 ------ -The round-trip-time (RTT) between 2 nodes is measured using -ping. Most test run measurements result on average between 0.21 and 0.28 ms. -A few runs start with a 0.32 - 0.35 ms RTT spike (This could be because of -normal ARP handling). To be able to draw conclusions more runs should be made. -SLA set to 10 ms. The SLA value is used as a reference, it has not been defined -by OPNFV. - -Detailed test results ---------------------- -The scenario was run on Ericsson POD2_ with: -Fuel 9.0 -OpenStack Mitaka -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. - diff --git a/docs/results/os-odl_l2-nofeature-ha.rst b/docs/results/os-odl_l2-nofeature-ha.rst deleted file mode 100644 index ac0c5bb59..000000000 --- a/docs/results/os-odl_l2-nofeature-ha.rst +++ /dev/null @@ -1,743 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -======================================= -Test Results for os-odl_l2-nofeature-ha -======================================= - -.. toctree:: - :maxdepth: 2 - - -apex -==== - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test runs, each run -on the LF POD1_ between September 14 and 17 in 2016. - -TC002 ------ -The round-trip-time (RTT) between 2 VMs on different blades is measured using -ping. Most test run measurements result on average between 0.49 ms and 0.60 ms. -Only one test run has reached greatest RTT spike of 0.93 ms. Meanwhile, the -smallest network latency is 0.33 ms, which is obtained on Sep. 14th. -SLA set to be 10 ms. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC005 ------ -The IO read bandwidth actually refers to the storage throughput, which is -measured by fio and the greatest IO read bandwidth of the four runs is 416 -MB/s. The IO read bandwidth of all four runs looks similar, with an average -between 128 and 131 MB/s. One of the runs has a minimum BW of 497 KB/s. The SLA -of read bandwidth sets to be 400 MB/s, which is used as a reference, and it has -not been defined by OPNFV. - -The results of storage IOPS for the four runs look similar with each other. The -IO read times per second of the four test runs have an average value at 1k per -second, and meanwhile, the minimum result is only 45 times per second. 
- -TC010 ------ -The tool we use to measure memory read latency is lmbench, which is a series of -micro benchmarks intended to measure basic operating system and hardware system -metrics. The memory read latency of the four runs is between 1.0859 ns and -1.0869 ns on average. The variations within each test run are quite different, -some vary from a large range and others have a small change. For example, the -largest change is on September 14th, the memory read latency of which is ranging -from 1.091 ns to 1.086 ns. However. -The SLA sets to be 30 ns. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC011 ------ -Packet delay variation between 2 VMs on different blades is measured using -Iperf3. On the first two test runs the reported packet delay variation varies between -0.0037 and 0.0740 ms, with an average delay variation between 0.0096 ms and 0.0321. -On the second date the delay variation varies between 0.0063 and 0.0096 ms, with -an average delay variation of 0.0124 - 0.0141 ms. - -TC012 ------ -Lmbench is also used to measure the memory read and write bandwidth, in which -we use bw_mem to obtain the results. Among the four test runs, the trend of -three memory bandwidth almost look similar, which all have a narrow range, and -the average result is 19.88 GB/s. Here SLA set to be 15 GB/s. The SLA value is -used as a reference, it has not been defined by OPNFV. - -TC014 ------ -The Unixbench is used to evaluate the IaaS processing speed with regards to -score of single cpu running and parallel running. It can be seen from the -dashboard that the processing test results vary from scores 3754k to 3831k, and -there is only one result one date. No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The mean packet throughput of the four test runs is between 307.3 kpps and -447.1 kpps, of which the result of the third run is the highest. The RTT -results of all the test runs keep flat at approx. 15 ms. It is obvious that the -PPS results are not as consistent as the RTT results. - -The No. flows of the four test runs are 240 k on average and the PPS results -look a little waved since the largest packet throughput is 418.1 kpps and the -minimum throughput is 326.5 kpps respectively. - -There are no errors of packets received in the four runs, but there are still -lost packets in all the test runs. The RTT values obtained by ping of the four -runs have the similar average vaue, that is approx. 15 ms. - -CPU load is measured by mpstat, and CPU load of the four test runs seem a -little similar, since the minimum value and the peak of CPU load is between 0 -percent and nine percent respectively. And the best result is obtained on Sep. -1, with an CPU load of nine percent. But on the whole, the CPU load is very -poor, since the average value is quite small. - -TC069 ------ -With the block size changing from 1 kb to 512 kb, the memory write bandwidth -tends to become larger first and then smaller within every run test, which -rangs from 28.2 GB/s to 29.5 GB/s and then to 29.2 GB/s on average. Since the -test id is one, it is that only the INT memory write bandwidth is tested. 
On -the whole, when the block size is 2 kb or 16 kb, the memory write bandwidth -look similar with a minimal BW of 25.8 GB/s and peak value of 28.3 GB/s. And -then with the block size becoming larger, the memory write bandwidth tends to -decrease. SLA sets to be 7 GB/s. The SLA value is used as a reference, it has -not been defined by OPNFV. - -TC070 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of the four test runs -look similar with each other, and within these test runs, the maximum RTT can -reach 39 ms and the average RTT is usually approx. 15 ms. The network latency -tested on Sep. 1 and Sep. 8 have a peak latency of 39 ms. But on the whole, -the average RTTs of the five runs keep flat and the network latency is -relatively short. - -Memory utilization is measured by free, which can display amount of free and -used memory in the system. The largest amount of used memory is 267 MiB for the -four runs. In general, the four test runs have very large memory utilization, -which can reach 257 MiB on average. On the other hand, for the mean free memory, -the four test runs have the similar trend with that of the mean used memory. -In general, the mean free memory change from 233 MiB to 241 MiB. - -Packet throughput and packet loss can be measured by pktgen, which is a tool -in the network for generating traffic loads for network experiments. The mean -packet throughput of the four test runs seem quite different, ranging from -305.3 kpps to 447.1 kpps. The average number of flows in these tests is -240000, and each run has a minimum number of flows of 2 and a maximum number -of flows of 1.001 Mil. At the same time, the corresponding average packet -throughput is between 354.4 kpps and 381.8 kpps. In summary, the PPS results -seem consistent. Within each test run of the four runs, when number of flows -becomes larger, the packet throughput seems not larger at the same time. - -TC071 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of the four test runs -look similar with each other. Within each test run, the maximum RTT is only 42 -ms and the average RTT is usually approx. 15 ms. On the whole, the average -RTTs of the four runs keep stable and the network latency is relatively small. - -Cache utilization is measured by cachestat, which can display size of cache and -buffer in the system. Cache utilization statistics are collected during UDP -flows sent between the VMs using pktgen as packet generator tool. The largest -cache size is 212 MiB, which is same for the four runs, and the smallest cache -size is 75 MiB. On the whole, the average cache size of the four runs look the -same and is between 197 MiB and 211 MiB. 
-Meanwhile, the trend of the buffer size stays flat, with a minimum value of
-7 MiB and a maximum value of 8 MiB, and an average value of about 7.9 MiB.
-
-Packet throughput can be measured by pktgen, a tool for generating traffic
-loads for network experiments. The mean packet throughput of the four test
-runs ranges from 354.4 kpps to 381.8 kpps. The average number of flows in
-these tests is 240k, and each run has a minimum number of flows of 2 and a
-maximum of 1.001 million. At the same time, the corresponding packet
-throughput varies between 305.3 kpps and 447.1 kpps. Within each of the four
-test runs, the packet throughput does not grow as the number of flows becomes
-larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the four runs ranges from 354.4 kpps to 381.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-displays average statistics since the system was started. Network utilization
-statistics are collected during the UDP flows sent between the VMs using
-pktgen as packet generator tool. The total number of packets transmitted per
-second looks similar for three of the test runs, varying widely from 10 pps
-to 501 kpps, while the remaining run stays stable with an average of 10 pps
-transmitted per second. The total number of packets received per second looks
-similar across the four runs, with a wide range of 2 pps to 815 kpps.
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however
-no significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 251000 flows than in
-other test runs where the PPS result is lower with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets. The
-number of lost packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on LF POD1_ with:
-Apex
-OpenStack Mitaka
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case.
-It is possible to choose which specific scenarios to look at, and then to
-zoom in on the details of each test run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in 2016.
-
-TC002
------
-The round-trip time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.5 and 0.6 ms.
-A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
-ARP handling). One test run has a greater RTT spike of 1.9 ms; it is the same
-run that has the 0.7 ms average. The other runs have no similar spike at all.
-To be able to draw conclusions more runs should be made.
-SLA set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 170 and 200 MB/s. Within each test run the results
-vary, with a minimum of 2 MB/s and a maximum of 838 MB/s overall. Most runs
-have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more
-in absolute numbers between the dates, between 617 and 838 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.2 ns. The variations within each test run are similar, between
-1.215 and 1.219 ns. One exception is February 16, where the average is
-1.222 ns and varies between 1.22 and 1.28 ns.
-SLA set to 30 ns. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first date the reported packet delay variation varies between
-0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
-On the second date the delay variation varies between 0.002 and 0.006 ms, with
-an average delay variation of 0.004 ms.
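-
-As a rough illustration of how such a packet delay variation figure can be
-obtained, the following minimal sketch runs an iperf3 UDP client and reads
-the mean jitter from its JSON report. It is not the Yardstick test
-definition; the server address and duration are placeholders, and it assumes
-an iperf3 server is already listening on the peer VM::
-
-    import json
-    import subprocess
-
-    SERVER_IP = "10.0.0.2"  # placeholder address of the peer VM
-
-    # -u selects UDP, -t the duration in seconds, -J requests JSON output
-    result = subprocess.run(
-        ["iperf3", "-c", SERVER_IP, "-u", "-t", "20", "-J"],
-        capture_output=True, text=True, check=True)
-    report = json.loads(result.stdout)
-
-    # for UDP tests iperf3 reports the mean packet delay variation as jitter
-    print("jitter:", report["end"]["sum"]["jitter_ms"], "ms")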
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth vary between
-17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimum
-BW of 16.4 GB/s and a maximum of 18.2 GB/s overall.
-SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3080 and 3240,
-with one result per date. The average score over the total is 3150.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases - around a 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the
-PPS throughput when the number of flows is increased. In some test runs the
-PPS is also greater with 1000000 flows than in other test runs where the PPS
-result is lower with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same one mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during the UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% and 2%. The peak of the CPU utilization
-ratio appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimum
-BW of 9.7 GB/s and a maximum of 29.5 GB/s overall.
-SLA set to 6 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases - around a 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the
-PPS throughput when the number of flows is increased. In some test runs the
-PPS is also greater with 1000000 flows than in other test runs where the PPS
-result is lower with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same one mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during the UDP flows sent between
-the VMs using pktgen as packet generator tool.
-The average measurements for memory utilization vary between 225 MB and
-246 MB. The peak of memory utilization appears around 340 MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases - around a 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the
-PPS throughput when the number of flows is increased. In some test runs the
-PPS is also greater with 1000000 flows than in other test runs where the PPS
-result is lower with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same one mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during the UDP flows sent between
-the VMs using pktgen as packet generator tool. The average measurements for
-cache utilization vary between 205 MB and 212 MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases - around a 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the
-PPS throughput when the number of flows is increased. In some test runs the
-PPS is also greater with 1000000 flows than in other test runs where the PPS
-result is lower with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS.
-The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same one mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during the UDP flows sent between
-the VMs using pktgen as packet generator tool. The total number of packets
-received per second averaged 200 kpps, and the total number of packets
-transmitted per second averaged 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-Fuel 9.0
-OpenStack Mitaka
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all: approx.
-15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
-difference in RTT results.
-RTT and throughput in particular come out with better results than, for
-instance, in the *fuel-os-nosdn-nofeature-ha* scenario. The reason for this
-should probably be further analyzed and understood. Also of interest could be
-further analysis to find patterns and reasons for lost traffic, and to see
-whether there are continuous variations where some test cases stand out with
-better or worse results than the general test case.
-
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD6_ between September 1 and 8 in 2016.
-
-TC002
------
-The round-trip time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.01 ms and 1.88
-ms. Only one test run reaches the greatest RTT spike of 1.88 ms, while the
-smallest network latency is 1.01 ms, obtained on Sep. 1st. In general, the
-averages of the network latency of the four test runs are between 1.29 ms and
-1.34 ms. SLA set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 183.65
-MB/s. The IO read bandwidth looks similar for three of the runs, with an
-average between 62.9 and 64.3 MB/s, except the one on Sep. 1, whose maximum
-storage throughput is only 159.1 MB/s. One of the runs has a minimum BW of
-685 KB/s and another has a maximum BW of 183.6 MB/s. The SLA of read
-bandwidth is set to 400 MB/s, which is used as a reference; it has not been
-defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-IO read times per second of the four test runs have an average value between
-1.41k per second and 1.64k per second, while the minimum result is only 55
-times per second.
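-
-For reference, a storage throughput sample of this kind can be collected with
-a plain fio job similar to the following hypothetical sketch; the job name,
-size, block size and runtime here are illustrative only, not the parameters
-used by the TC005 test definition::
-
-    import json
-    import subprocess
-
-    # a small sequential read job with JSON output for easy parsing
-    result = subprocess.run(
-        ["fio", "--name=readtest", "--rw=read", "--bs=64k", "--size=256m",
-         "--runtime=30", "--time_based", "--output-format=json"],
-        capture_output=True, text=True, check=True)
-    job = json.loads(result.stdout)["jobs"][0]
-
-    # fio reports per-job read bandwidth in KiB/s and IO operations per second
-    print("read BW:", job["read"]["bw"], "KiB/s, IOPS:", job["read"]["iops"])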
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, a series of
-micro-benchmarks intended to measure basic operating system and hardware
-metrics. The memory read latency of the four runs is between 1.152 ns and
-1.179 ns on average. The variations within each test run differ: some runs
-vary over a large range and others change very little. For example, the
-largest change is on September 8, where the memory read latency ranges from
-1.120 ns to 1.221 ns, whereas the results on September 7 change very little.
-The SLA is set to 30 ns. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. In general, the packet delay of the first two runs
-looks similar, as they both stay stable within each run, and their mean
-packet delays are 0.0087 ms and 0.0127 ms respectively. Of the four runs, the
-fourth has the worst result, with a packet delay reaching 0.0187 ms. The SLA
-value is set to 10 ms. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for
-which we use bw_mem to obtain the results. Among the four test runs, three
-memory bandwidth trends look similar, all with a narrow range, and the
-average result is 11.78 GB/s. The SLA is set to 15 GB/s. The SLA value is
-used as a reference; it has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed in terms of the
-single-CPU and parallel benchmark scores. It can be seen from the dashboard
-that the processing test results vary from scores 3260k to 3328k, with only
-one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The mean packet throughput of the four test runs is between 307.3 kpps and
-447.1 kpps, of which the result of the third run is the highest. The RTT
-results of all the test runs stay flat at approx. 15 ms. It is obvious that
-the PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, with the largest packet throughput at 418.1 kpps
-and the minimum at 326.5 kpps.
-
-There are no packet receive errors in the four runs, but all test runs show
-some packet loss. The RTT values obtained by ping in the four runs have a
-similar average value of approx. 15 ms.
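-
-The RTT numbers quoted here come from the summary line that ping itself
-prints. A minimal sketch of that collection step, with a placeholder target
-address (the exact options used by the test definition may differ)::
-
-    import re
-    import subprocess
-
-    TARGET = "10.0.0.2"  # placeholder address of the peer VM
-
-    out = subprocess.run(["ping", "-c", "10", TARGET],
-                         capture_output=True, text=True, check=True).stdout
-
-    # iputils ping ends with: rtt min/avg/max/mdev = a/b/c/d ms
-    match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
-    if match:
-        rtt_min, rtt_avg, rtt_max, _ = map(float, match.groups())
-        print(f"avg RTT {rtt_avg} ms (min {rtt_min}, max {rtt_max})")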
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar: the minimum and peak values lie between 0 and nine percent
-respectively, with the highest load of nine percent obtained on Sep. 1. On
-the whole, though, the CPU load is very low, since the average value is quite
-small.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-within each test run tends to first increase and then decrease, ranging from
-21.9 GB/s up to 25.9 GB/s and then down to 17.8 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On the whole,
-when the block size is 2 kb or 16 kb the memory write bandwidth looks
-similar, with a minimum BW of 24.8 GB/s and a peak value of 27.8 GB/s; with
-the block size becoming larger, the memory write bandwidth tends to decrease.
-The SLA is set to 7 GB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs the maximum RTT reaches
-39 ms and the average RTT is usually approx. 15 ms. The runs on Sep. 1 and
-Sep. 8 have a peak latency of 39 ms, but on the whole the average RTTs of the
-four runs stay flat and the network latency is relatively short.
-
-Memory utilization is measured by free, which displays the amount of free and
-used memory in the system. The largest amount of used memory is 267 MiB for
-the four runs. In general, the four test runs have a fairly high memory
-utilization, reaching 257 MiB on average. The mean free memory of the four
-test runs follows a similar trend to the mean used memory, changing from
-233 MiB to 241 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean packet throughput
-of the four test runs differs considerably, ranging from 305.3 kpps to
-447.1 kpps. The average number of flows in these tests is 240000, and each run
-has a minimum number of flows of 2 and a maximum of 1.001 million. At the same
-time, the corresponding average packet throughput is between 354.4 kpps and
-381.8 kpps. In summary, the PPS results seem consistent. Within each of the
-four test runs, the packet throughput does not grow as the number of flows
-becomes larger.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other.
-Within each test run, the maximum RTT is only 42 ms and the average RTT is
-usually approx. 15 ms. On the whole, the average RTTs of the four runs stay
-stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which displays the size of the
-cache and buffers in the system. Cache utilization statistics are collected
-during the UDP flows sent between the VMs using pktgen as packet generator
-tool. The largest cache size is 212 MiB, the same for all four runs, and the
-smallest cache size is 75 MiB. On the whole, the average cache size of the
-four runs looks the same and is between 197 MiB and 211 MiB. Meanwhile, the
-trend of the buffer size stays flat, with a minimum value of 7 MiB and a
-maximum value of 8 MiB, and an average value of about 7.9 MiB.
-
-Packet throughput can be measured by pktgen, a tool for generating traffic
-loads for network experiments. The mean packet throughput of the four test
-runs ranges from 354.4 kpps to 381.8 kpps. The average number of flows in
-these tests is 240k, and each run has a minimum number of flows of 2 and a
-maximum of 1.001 million. At the same time, the corresponding packet
-throughput varies between 305.3 kpps and 447.1 kpps. Within each of the four
-test runs, the packet throughput does not grow as the number of flows becomes
-larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the four runs ranges from 354.4 kpps to 381.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-displays average statistics since the system was started. Network utilization
-statistics are collected during the UDP flows sent between the VMs using
-pktgen as packet generator tool. The total number of packets transmitted per
-second looks similar for three of the test runs, varying widely from 10 pps
-to 501 kpps, while the remaining run stays stable with an average of 10 pps
-transmitted per second. The total number of packets received per second looks
-similar across the four runs, with a wide range of 2 pps to 815 kpps.
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however
-no significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 251000 flows than in
-other test runs where the PPS result is lower with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets. The
-number of lost packets normally differs a lot per test run.
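-
-The transmitted/received packets-per-second figures above are taken from sar
-samples. A minimal, hypothetical sketch of such a sampling step (interface
-name and sampling interval are placeholders)::
-
-    import subprocess
-
-    # sample network device statistics once per second, ten times;
-    # the rxpck/s and txpck/s columns are packets received/transmitted per s
-    out = subprocess.run(["sar", "-n", "DEV", "1", "10"],
-                         capture_output=True, text=True, check=True).stdout
-    for line in out.splitlines():
-        if "eth0" in line:  # placeholder interface name
-            print(line)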
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-Joid
-OpenStack Mitaka
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
diff --git a/docs/results/os-odl_l2-sfc-ha.rst b/docs/results/os-odl_l2-sfc-ha.rst
deleted file mode 100644
index e27562cae..000000000
--- a/docs/results/os-odl_l2-sfc-ha.rst
+++ /dev/null
@@ -1,231 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-==================================
-Test Results for os-odl_l2-sfc-ha
-==================================
-
-.. toctree::
-   :maxdepth: 2
-
-
-Fuel
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the LF POD2_ or Ericsson POD2_ between September 16 and 20 in 2016.
-
-TC002
------
-The round-trip time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.32 ms and 1.42
-ms. Only one test run, on Sep. 20, reaches the greatest RTT spike of 4.66 ms,
-while the smallest network latency is 0.16 ms, obtained on Sep. 17th. To sum
-up, the network latency curve varies very little, staying below 5 ms. The SLA
-is set to 10 ms. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 734
-MB/s. The IO read bandwidth of the first three runs looks similar, with an
-average of less than 100 KB/s, except the one on Sep. 20, whose maximum
-storage throughput reaches 734 MB/s. The SLA of read bandwidth is set to
-400 MB/s, which is used as a reference; it has not been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-IO read times per second of the four test runs have an average value between
-1.8k per second and 3.27k per second, while the minimum result is only 60
-times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, a series of
-micro-benchmarks intended to measure basic operating system and hardware
-metrics. The memory read latency of the four runs is between 1.085 ns and
-1.218 ns on average. The variations within each test run are quite small. For
-Ericsson pod2 the average memory latency is approx. 1.217 ns, while for LF
-pod2 the average value is about 1.085 ns. It can be seen that the performance
-of LF is better than Ericsson's. The SLA is set to 30 ns. The SLA value is
-used as a reference; it has not been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for
-which we use bw_mem to obtain the results.
-The four test runs all have a narrow range of change, with an average memory
-read and write BW of 18.5 GB/s. The SLA is set to 15 GB/s. The SLA value is
-used as a reference; it has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed in terms of the
-single-CPU and parallel benchmark scores. It can be seen from the dashboard
-that the processing test results vary from scores 3209k to 3843k, with only
-one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The mean packet throughput of three of the test runs is between 439 kpps and
-582 kpps, while the test run on Sep. 17th has the lowest average value of 371
-kpps. The RTT results of all the test runs stay flat at approx. 10 ms. It is
-obvious that the PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, with the largest packet throughput at 680 kpps
-and the minimum at 319 kpps.
-
-There are no packet receive errors in the four runs, but all test runs show
-some packet loss. The RTT values obtained by ping in the four runs follow a
-similar trend, with an average value of approx. 12 ms.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar: the minimum and peak values lie between 0 and ten percent
-respectively, with the highest load of ten percent obtained on Sep. 17th. On
-the whole, though, the CPU load is very low, since the average value is quite
-small.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the average memory write
-bandwidth within each test run on the two pods tends to first increase and
-then decrease, ranging from 25.1 GB/s up to 29.4 GB/s and then down to
-19.2 GB/s on average. Since the test id is one, only the INT memory write
-bandwidth is tested. On the whole, with the block size becoming larger, the
-memory write bandwidth tends to decrease. The SLA is set to 7 GB/s. The SLA
-value is used as a reference; it has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs the maximum RTT reaches
-27 ms and the average RTT is usually approx. 12 ms. The run tested on Sep.
-27th has a peak latency of 27 ms, but on the whole the average RTTs of the
-four runs stay flat.
-
-Memory utilization is measured by free, which displays the amount of free and
-used memory in the system. The largest amount of used memory is 269 MiB for
-the four runs.
-In general, the four test runs have a fairly high memory utilization,
-reaching 251 MiB on average. The mean free memory of the four test runs
-follows a similar trend to the mean used memory, changing from 231 MiB to
-248 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean packet throughput
-of the four test runs differs considerably, ranging from 371 kpps to
-582 kpps. The average number of flows in these tests is 240000, and each run
-has a minimum number of flows of 2 and a maximum of 1.001 million. At the
-same time, the corresponding average packet throughput is between 319 kpps
-and 680 kpps. In summary, the PPS results seem consistent. Within each of the
-four test runs, the packet throughput does not grow as the number of flows
-becomes larger.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other. Within each test run, the maximum RTT is only 24
-ms and the average RTT is usually approx. 12 ms. On the whole, the average
-RTTs of the four runs stay stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which displays the size of the
-cache and buffers in the system. Cache utilization statistics are collected
-during the UDP flows sent between the VMs using pktgen as packet generator
-tool. The largest cache size is 213 MiB and the smallest cache size is
-99 MiB, the same for all four runs. On the whole, the average cache size of
-the four runs looks the same and is between 184 MiB and 205 MiB. Meanwhile,
-the trend of the buffer size stays stable, with a minimum value of 7 MiB and
-a maximum value of 8 MiB.
-
-Packet throughput can be measured by pktgen, a tool for generating traffic
-loads for network experiments. The mean packet throughput of the four test
-runs ranges from 371 kpps to 582 kpps. The average number of flows in these
-tests is 240k, and each run has a minimum number of flows of 2 and a maximum
-of 1.001 million. At the same time, the corresponding packet throughput
-varies between 319 kpps and 680 kpps. Within each of the four test runs, the
-packet throughput does not grow as the number of flows becomes larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 24 ms with an average latency of less than 13 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the four runs ranges from 370 kpps to 582 kpps.
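-
-The flow counts swept in these pktgen test cases are set through the kernel
-pktgen /proc interface. The following is a simplified, hypothetical sketch of
-that mechanism (device, destination and counts are placeholders; it must run
-as root with the pktgen module loaded, and it is not the exact Yardstick
-configuration)::
-
-    def pgset(path, cmd):
-        # pktgen is driven by writing single-line commands to its /proc files
-        with open(path, "w") as f:
-            f.write(cmd + "\n")
-
-    THREAD = "/proc/net/pktgen/kpktgend_0"
-    DEV = "/proc/net/pktgen/eth0"
-
-    pgset(THREAD, "rem_device_all")
-    pgset(THREAD, "add_device eth0")    # bind the device to this thread
-    pgset(DEV, "count 1000000")         # packets to send in total
-    pgset(DEV, "pkt_size 64")
-    pgset(DEV, "dst 10.0.0.2")          # placeholder destination VM
-    pgset(DEV, "flag FLOW_SEQ")
-    pgset(DEV, "flows 240000")          # concurrent flows, as swept above
-    pgset(DEV, "flowlen 10")
-    pgset("/proc/net/pktgen/pgctrl", "start")  # blocks while sending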
-
-Network utilization is measured by sar, the system activity reporter, which
-displays average statistics since the system was started. Network utilization
-statistics are collected during the UDP flows sent between the VMs using
-pktgen as packet generator tool. The total number of packets transmitted per
-second looks similar for the four test runs, varying widely from 10 pps to
-697 kpps. The total number of packets received per second looks similar for
-three runs, with a wide range of 2 pps to 1.497 Mpps, while the results on
-Sep. 18th and 20th have a much smaller maximum number of packets received
-per second, 817 kpps.
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however
-no significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 251000 flows than in
-other test runs where the PPS result is lower with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets. The
-number of lost packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-Fuel 9.0
-OpenStack Mitaka
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/results/os-onos-nofeature-ha.rst b/docs/results/os-onos-nofeature-ha.rst
deleted file mode 100644
index d8b3ace5f..000000000
--- a/docs/results/os-onos-nofeature-ha.rst
+++ /dev/null
@@ -1,257 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-======================================
-Test Results for os-onos-nofeature-ha
-======================================
-
-.. toctree::
-   :maxdepth: 2
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test run as well.
-
-All of the test case results below are based on 5 scenario test runs, each run
-on the Intel POD6_ between September 13 and 16 in 2016.
-
-TC002
------
-The round-trip time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.50 and 1.68 ms.
-Only one test run reaches the greatest RTT spike of 2.62 ms; that same run
-also has the smallest RTT of 1.00 ms. The other four runs have no similar
-spike at all, and their minimum and average RTTs are approx. 1.06 ms and
-1.32 ms. SLA set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 175.4
-MB/s.
-The IO read bandwidth of the four runs looks similar across the four
-different days, with an average between 58.1 and 62.0 MB/s, except the one on
-Sep. 14, whose maximum storage throughput is only 133.0 MB/s. One of the runs
-has a minimum BW of 497 KB/s and another has a maximum BW of 177.4 MB/s. The
-SLA of read bandwidth is set to 400 MB/s, which is used as a reference; it
-has not been defined by OPNFV.
-
-The results of storage IOPS for the five runs look similar to each other. The
-IO read times per second of the five test runs have an average value between
-1.20 K/s and 1.61 K/s, while the minimum result is only 41 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, a series of
-micro-benchmarks intended to measure basic operating system and hardware
-metrics. The memory read latency of the five runs is between 1.146 ns and
-1.172 ns on average. The variations within each test run differ: some runs
-vary over a large range and others change very little. For example, the
-largest change is on September 13, where the memory read latency ranges from
-1.152 ns to 1.221 ns, whereas the results on September 14 change very little.
-The SLA is set to 30 ns. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the five test runs
-differ from each other. In general, the packet delay of the first two runs
-looks similar, as they both stay stable within each run, and their mean
-packet delays are 0.07714 ms and 0.07982 ms respectively. Of the five runs,
-the third has the worst result, with a packet delay reaching 0.08384 ms. The
-trends of the remaining two runs look the same, with average packet delays of
-0.07808 ms and 0.07727 ms respectively. The SLA value is set to 10 ms. The
-SLA value is used as a reference; it has not been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for
-which we use bw_mem to obtain the results. Among the five test runs, the
-memory bandwidth of the last three test runs stays almost stable within each
-run, at 11.64, 11.71 and 11.61 GB/s on average. However, the memory read and
-write bandwidth on Sep. 13 has a large range, from 6.68 GB/s to 11.73 GB/s.
-The SLA is set to 15 GB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed in terms of the
-single-CPU and parallel benchmark scores. It can be seen from the dashboard
-that the processing test results vary from scores 3208 to 3314, with only one
-result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The mean packet throughput of the five test runs is between 259.6 kpps and
-318.4 kpps, of which the result of the second run is the highest. The RTT
-results of all the test runs stay flat at approx. 20 ms. It is obvious that
-the PPS results are not as consistent as the RTT results.
-The number of flows in the five test runs is 240 k on average, and the PPS
-results fluctuate somewhat, with the largest packet throughput at 398.9 kpps
-and the minimum at 250.6 kpps.
-
-There are no packet receive errors in the five runs, but all test runs show
-some packet loss. The RTT values obtained by ping in the five runs have a
-similar average value, between 17 ms and 22 ms, of which the worst RTT is
-53 ms, on Sep. 14th.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar: the minimum and peak values lie between 0 and 10 percent
-respectively, with the highest load of 10 percent obtained on Sep. 13th.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-within each test run tends to first increase and then decrease, ranging from
-21.6 GB/s up to 26.8 GB/s and then down to 18.4 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On the whole,
-when the block size is 8 kb or 16 kb the memory write bandwidth looks
-similar, with a minimum BW of 23.0 GB/s and a peak value of 28.6 GB/s; with
-the block size becoming larger, the memory write bandwidth tends to decrease.
-The SLA is set to 7 GB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The network latency is measured by ping, and the results of the five test runs
-look similar to each other; within these test runs the maximum RTT reaches
-53 ms and the average RTT is usually approx. 18 ms. The run tested on Sep. 14
-has a peak latency of 53 ms, but on the whole the average RTTs of the five
-runs stay flat and the network latency is relatively short.
-
-Memory utilization is measured by free, which displays the amount of free and
-used memory in the system. The largest amount of used memory is 272 MiB, on
-Sep. 14. In general, the mean used memory of the five test runs follows a
-similar trend; the minimum used memory size is approx. 150 MiB and the
-average is about 250 MiB. The mean free memory of the five test runs likewise
-follows a similar trend, changing between 218 MiB and 342 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean packet throughput
-of the five test runs differs somewhat, ranging from 285.29 kpps to 297.76
-kpps. The average number of flows in these tests is 240000, and each run has
-a minimum number of flows of 2 and a maximum of 1.001 million. At the same
-time, the corresponding packet throughput varies between 250.6 kpps and
-398.9 kpps, with an average packet throughput between 277.2 kpps and
-318.4 kpps. In summary, the PPS results seem consistent. Within each of the
-five test runs, the packet throughput does not grow as the number of flows
-becomes larger.
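-
-The used/free memory figures in TC070 come from the free command. A minimal
-sketch of one such sample (with -m, free reports the Mem: row in MiB)::
-
-    import subprocess
-
-    out = subprocess.run(["free", "-m"], capture_output=True, text=True,
-                         check=True).stdout
-    mem = next(l for l in out.splitlines() if l.startswith("Mem:")).split()
-    total, used, free_mem = int(mem[1]), int(mem[2]), int(mem[3])
-    print(f"used {used} MiB, free {free_mem} MiB of {total} MiB")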
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The network latency is measured by ping, and the results of the five test runs
-look similar to each other. Within each test run, the maximum RTT is only 49
-ms and the average RTT is usually approx. 20 ms. On the whole, the average
-RTTs of the five runs stay stable and the network latency is relatively short.
-
-Cache utilization is measured by cachestat, which displays the size of the
-cache and buffers in the system. Cache utilization statistics are collected
-during the UDP flows sent between the VMs using pktgen as packet generator
-tool. The largest cache size is 215 MiB in the four runs, and the smallest
-cache size is 95 MiB. On the whole, the average cache size of the five runs
-changes little and is about 200 MiB, except the one on Sep. 14th, whose mean
-cache size is very small and stays at 102 MiB. Meanwhile, the trend of the
-buffer size stays flat, with a minimum value of 7 MiB and a maximum value of
-8 MiB, and an average value of about 7.8 MiB.
-
-Packet throughput can be measured by pktgen, a tool for generating traffic
-loads for network experiments. The mean packet throughput of the four test
-runs differs somewhat, ranging from 285.29 kpps to 297.76 kpps. The average
-number of flows in these tests is 239.7k, and each run has a minimum number
-of flows of 2 and a maximum of 1.001 million. At the same time, the
-corresponding packet throughput varies between 227.3 kpps and 398.9 kpps,
-with an average packet throughput between 277.2 kpps and 318.4 kpps. Within
-each of the five test runs, the packet throughput does not grow as the number
-of flows becomes larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 49 ms with an average latency of less than 22 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the five runs ranges from 250.6 kpps to 398.9 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-displays average statistics since the system was started. Network utilization
-statistics are collected during the UDP flows sent between the VMs using
-pktgen as packet generator tool. The total number of packets transmitted per
-second looks similar for four of the test runs, varying widely from 10 pps
-to 399 kpps, except the one on Sep. 14th, whose total number of packets
-transmitted per second stays stable at 10 pps. Similarly, the total number
-of packets received per second looks the same for four runs, except the one
-on Sep. 14th, whose value is only 10 pps.
-In some test runs when running with less than approx. 90000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however
-no significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 250000 flows than in
-other test runs where the PPS result is lower with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets. The
-number of lost packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-Joid
-OpenStack Mitaka
-Onos Goldeneye
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/results/os-onos-sfc-ha.rst b/docs/results/os-onos-sfc-ha.rst
deleted file mode 100644
index e52ae3d55..000000000
--- a/docs/results/os-onos-sfc-ha.rst
+++ /dev/null
@@ -1,517 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-===============================
-Test Results for os-onos-sfc-ha
-===============================
-
-.. toctree::
-   :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ or LF POD2_ between September 5 and 10 in 2016.
-
-TC002
------
-The round-trip time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.5 and 0.6 ms.
-A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
-ARP handling). One test run has a greater RTT spike of 1.9 ms; it is the same
-run that has the 0.7 ms average. The other runs have no similar spike at all.
-To be able to draw conclusions more runs should be made.
-SLA set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 170 and 200 MB/s. Within each test run the results
-vary, with a minimum of 2 MB/s and a maximum of 838 MB/s overall. Most runs
-have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more
-in absolute numbers between the dates, between 617 and 838 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.2 ns. The variations within each test run are similar, between
-1.215 and 1.219 ns. One exception is February 16, where the average is
-1.222 ns and varies between 1.22 and 1.28 ns.
-SLA set to 30 ns. The SLA value is used as a reference; it has not been
-defined by OPNFV.
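-
-For reference, the memory read latency figures of TC010 are of the kind that
-lmbench's lat_mem_rd produces. A hypothetical sketch of one invocation (the
-array size and stride are illustrative, and the assumption that lat_mem_rd
-writes its size/latency pairs to stderr may vary between builds)::
-
-    import subprocess
-
-    # walk arrays up to 128 MB with a 128-byte stride
-    result = subprocess.run(["lat_mem_rd", "128", "128"],
-                            capture_output=True, text=True)
-    for line in result.stderr.splitlines():
-        try:
-            size_mb, latency_ns = map(float, line.split())
-        except ValueError:
-            continue  # skip header lines
-        print(f"{size_mb:9.5f} MB -> {latency_ns:.3f} ns")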
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first date the reported packet delay variation varies between
-0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
-On the second date the delay variation varies between 0.002 and 0.006 ms, with
-an average delay variation of 0.004 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth vary between
-17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimum
-BW of 16.4 GB/s and a maximum of 18.2 GB/s overall.
-SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3080 and 3240,
-with one result per date. The average score over the total is 3150.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the number of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the number of flows set up, resulting in higher RTT and lower PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases - around a 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the
-PPS throughput when the number of flows is increased. In some test runs the
-PPS is also greater with 1000000 flows than in other test runs where the PPS
-result is lower with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same one mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during the UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% and 2%. The peak of the CPU utilization
-ratio appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimum
-BW of 9.7 GB/s and a maximum of 29.5 GB/s overall.
-SLA set to 6 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
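-
-The TC069 and TC012 bandwidth figures are of the kind produced by lmbench's
-bw_mem. A hypothetical single-block-size sketch (the real test sweeps block
-sizes; the "size-in-MB bandwidth-in-MB/s" stderr output format is an
-assumption that may vary between builds)::
-
-    import subprocess
-
-    # "wr" measures write bandwidth over a 512 KB working set
-    result = subprocess.run(["bw_mem", "512k", "wr"],
-                            capture_output=True, text=True)
-    size_mb, bw_mbps = map(float, result.stderr.split())
-    print(f"{bw_mbps / 1000:.2f} GB/s write bandwidth at {size_mb} MB")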
- -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Memory utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for memory -utilization vary between 225MB to 246MB. The peak of memory utilization appears -around 340MB. - -TC071 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. 
There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Cache utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for cache -utilization vary between 205MB to 212MB. - -TC072 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. Total number of packets received per -second was average on 200 kpps and total number of packets transmitted per -second was average on 600 kpps. - -Detailed test results ---------------------- -The scenario was run on Ericsson POD2_ and LF POD2_ with: -Fuel 9.0 -OpenStack Mitaka -Onos Goldeneye -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. - -Conclusions and recommendations -------------------------------- -The pktgen test configuration has a relatively large base effect on RTT in -TC037 compared to TC002, where there is no background load at all. Approx. -15 ms compared to approx. 0.5 ms, which is more than a 3000 percentage -difference in RTT results. -Especially RTT and throughput come out with better results than for instance -the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should -probably be further analyzed and understood. 
Also of interest could be -to make further analyzes to find patterns and reasons for lost traffic. -Also of interest could be to see if there are continuous variations where -some test cases stand out with better or worse results than the general test -case. - - -Joid -===== - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test runs, each run -on the Intel POD6_ between September 8 and 11 in 2016. - -TC002 ------ -The round-trip-time (RTT) between 2 VMs on different blades is measured using -ping. Most test run measurements result on average between 1.35 ms and 1.57 ms. -Only one test run has reached greatest RTT spike of 2.58 ms. Meanwhile, the -smallest network latency is 1.11 ms, which is obtained on Sep. 11st. In -general, the average of network latency of the four test runs are between 1.35 -ms and 1.57 ms. SLA set to be 10 ms. The SLA value is used as a reference, it -has not been defined by OPNFV. - -TC005 ------ -The IO read bandwidth actually refers to the storage throughput, which is -measured by fio and the greatest IO read bandwidth of the four runs is 175.4 -MB/s. The IO read bandwidth of the three runs looks similar, with an average -between 43.7 and 56.3 MB/s, except one on Sep. 8, for its maximum storage -throughput is only 107.9 MB/s. One of the runs has a minimum BW of 478 KM/s and -other has a maximum BW of 168.6 MB/s. The SLA of read bandwidth sets to be -400 MB/s, which is used as a reference, and it has not been defined by OPNFV. - -The results of storage IOPS for the four runs look similar with each other. The -IO read times per second of the four test runs have an average value between -978 per second and 1.20 K/s, and meanwhile, the minimum result is only 36 times -per second. - -TC010 ------ -The tool we use to measure memory read latency is lmbench, which is a series of -micro benchmarks intended to measure basic operating system and hardware system -metrics. The memory read latency of the four runs is between 1.164 ns and 1.244 -ns on average. The variations within each test run are quite different, some -vary from a large range and others have a small change. For example, the -largest change is on September 10, the memory read latency of which is ranging -from 1.128 ns to 1.381 ns. However, the results on September 11 change very -little. The SLA sets to be 30 ns. The SLA value is used as a reference, it has -not been defined by OPNFV. - -TC011 ------ -Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on -different blades. The reported packet delay variations of the four test runs -differ from each other. In general, the packet delay of two runs look similar, -for they both stay stable within each run. And the mean packet delay of them -are 0.0772 ms and 0.0788 ms respectively. Of the four runs, the fourth has the -worst result, because the packet delay reaches 0.0838 ms. The rest one has a -large wide range from 0.0666 ms to 0.0798 ms. The SLA value sets to be 10 ms. -The SLA value is used as a reference, it has not been defined by OPNFV. 
- -TC012 ------ -Lmbench is also used to measure the memory read and write bandwidth, in which -we use bw_mem to obtain the results. Among the four test runs, the trend of the -memory bandwidth almost look similar, which all have a large wide range, and -the minimum and maximum results are 9.02 GB/s and 18.14 GB/s. Here SLA set to -be 15 GB/s. The SLA value is used as a reference, it has not been defined by -OPNFV. - -TC014 ------ -The Unixbench is used to evaluate the IaaS processing speed with regards to -score of single cpu running and parallel running. It can be seen from the -dashboard that the processing test results vary from scores 3395 to 3475, and -there is only one result one date. No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The mean packet throughput of the four test runs is between 362.1 kpps and -363.5 kpps, of which the result of the third run is the highest. The RTT -results of all the test runs keep flat at approx. 17 ms. It is obvious that the -PPS results are not as consistent as the RTT results. - -The No. flows of the four test runs are 240 k on average and the PPS results -look a little waved since the largest packet throughput is 418.1 kpps and the -minimum throughput is 326.5 kpps respectively. - -There are no errors of packets received in the four runs, but there are still -lost packets in all the test runs. The RTT values obtained by ping of the four -runs have the similar average vaue, that is approx. 17 ms, of which the worst -RTT is 39 ms on Sep. 11st. - -CPU load is measured by mpstat, and CPU load of the four test runs seem a -little similar, since the minimum value and the peak of CPU load is between 0 -percent and nine percent respectively. And the best result is obtained on Sep. -10, with an CPU load of nine percent. - -TC069 ------ -With the block size changing from 1 kb to 512 kb, the memory write bandwidth -tends to become larger first and then smaller within every run test, which -rangs from 25.9 GB/s to 26.6 GB/s and then to 18.1 GB/s on average. Since the -test id is one, it is that only the INT memory write bandwidth is tested. On -the whole, when the block size is from 2 kb to 16 kb, the memory write -bandwidth look similar with a minimal BW of 22.1 GB/s and peak value of 28.6 -GB/s. And then with the block size becoming larger, the memory write bandwidth -tends to decrease. SLA sets to be 7 GB/s. The SLA value is used as a reference, -it has not been defined by OPNFV. - -TC070 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of the four test runs -look similar with each other, and within these test runs, the maximum RTT can -reach 39 ms and the average RTT is usually approx. 17 ms. The network latency -tested on Sep. 11 shows that it has a peak latency of 39 ms. 
But on the whole, -the average RTTs of the five runs keep flat and the network latency is -relatively short. - -Memory utilization is measured by free, which can display amount of free and -used memory in the system. The largest amount of used memory is 270 MiB on the -first two runs. In general, the mean used memory of two test runs have very -large memory utilization, which can reach 264 MiB on average. And the other two -runs have a large wide range of memory usage with the minimum value of 150 MiB -and the maximum value of 270 MiB. On the other hand, for the mean free memory, -the four test runs have the similar trend with that of the mean used memory. -In general, the mean free memory change from 220 MiB to 342 MiB. - -Packet throughput and packet loss can be measured by pktgen, which is a tool -in the network for generating traffic loads for network experiments. The mean -packet throughput of the four test runs seem quite different, ranging from -326.5 kpps to 418.1 kpps. The average number of flows in these tests is -240000, and each run has a minimum number of flows of 2 and a maximum number -of flows of 1.001 Mil. At the same time, the corresponding packet throughput -differ between 326.5 kpps and 418.1 kpps with an average packet throughput between -361.7 kpps and 363.5 kpps. In summary, the PPS results seem consistent. Within each -test run of the four runs, when number of flows becomes larger, the packet -throughput seems not larger at the same time. - -TC071 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The network latency is measured by ping, and the results of the four test runs -look similar with each other. Within each test run, the maximum RTT is only 47 -ms and the average RTT is usually approx. 15 ms. On the whole, the average -RTTs of the four runs keep stable and the network latency is relatively small. - -Cache utilization is measured by cachestat, which can display size of cache and -buffer in the system. Cache utilization statistics are collected during UDP -flows sent between the VMs using pktgen as packet generator tool. The largest -cache size is 214 MiB, which is same for the four runs, and the smallest cache -size is 94 MiB. On the whole, the average cache size of the four runs look the -same and is between 198 MiB and 207 MiB. Meanwhile, the tread of the buffer -size keep flat, since they have a minimum value of 7 MiB and a maximum value of -8 MiB, with an average value of about 7.9 MiB. - -Packet throughput can be measured by pktgen, which is a tool in the network for -generating traffic loads for network experiments. The mean packet throughput of -the four test runs seem quite the same, which is approx. 363 kpps. The average -number of flows in these tests is 240k, and each run has a minimum number of -flows of 2 and a maximum number of flows of 1.001 Mil. At the same time, the -corresponding packet throughput differ between 327 kpps and 418 kpps with an -average packet throughput of about 363 kpps. Within each test run of the four -runs, when number of flows becomes larger, the packet throughput seems not -larger in the meantime. 
- -TC072 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs -between 0 ms and 47 ms with an average leatency of less than 16 ms. The PPS -results are not as consistent as the RTT results, for the mean packet -throughput of the four runs differ from 361.7 kpps to 365.0 kpps. - -Network utilization is measured by sar, that is system activity reporter, which -can display the average statistics for the time since the system was started. -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The largest total number of packets -transmitted per second look similar for two test runs, whose values change a -lot from 10 pps to 432 kpps. While results of the other test runs seem the same -and keep stable with the average number of packets transmitted per second of 10 -pps. However, the total number of packets received per second of the four runs -look similar, which have a large wide range of 2 pps to 657 kpps. - -In some test runs when running with less than approx. 250000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. For the other test runs there is however no -significant change to the PPS throughput when the number of flows are -increased. In some test runs the PPS is also greater with 250000 flows -compared to other test runs where the PPS result is less with only 2 flows. - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally differs a lot per test run. - -Detailed test results ---------------------- -The scenario was run on Intel POD6_ with: -Joid -OpenStack Mitaka -Onos Goldeneye -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Conclusions and recommendations -------------------------------- -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. - diff --git a/docs/results/overview.rst b/docs/results/overview.rst deleted file mode 100644 index b4a050545..000000000 --- a/docs/results/overview.rst +++ /dev/null @@ -1,106 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -Yardstick test tesult document overview -======================================= - -.. _`Yardstick user guide`: artifacts.opnfv.org/yardstick/docs/userguide/index.html -.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/ -.. _Scenarios: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-scenarios - -This document provides an overview of the results of test cases developed by -the OPNFV Yardstick Project, executed on OPNFV community labs. - -Yardstick project is described in `Yardstick user guide`_. - -Yardstick is run systematically at the end of an OPNFV fresh installation. 
-The system under test (SUT) is installed by the installer Apex, Compass, Fuel -or Joid on Performance Optimized Datacenter (POD); One single installer per -POD. All the runnable test cases are run sequentially. The installer and the -POD are considered to evaluate whether the test case can be run or not. That is -why all the number of test cases may vary from 1 installer to another and from -1 POD to POD. - -OPNFV CI provides automated build, deploy and testing for -the software developed in OPNFV. Unless stated, the reported tests are -automated via Jenkins Jobs. Yardsrick test results from OPNFV Continous -Integration can be found in the following dashboard: - -* *Yardstick* Dashboard_: uses influx DB to store Yardstick CI test results and - Grafana for visualization (user: opnfv/ password: opnfv) - -The results of executed test cases are available in Dashboard_ and all logs are -stored in Jenkins_. - -It was not possible to execute the entire Yardstick test cases suite on the -PODs assigned for release verification over a longer period of time, due to -continuous work on the software components and blocking faults either on -environment, features or test framework. - -The list of scenarios supported by each installer can be described as follows: - -+-------------------------+---------+---------+---------+---------+ -| Scenario | Apex | Compass | Fuel | Joid | -+=========================+=========+=========+=========+=========+ -| os-nosdn-nofeature-noha | | | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-nofeature-ha | X | X | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-nofeature-ha | X | X | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-nofeature-noha| | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l3-nofeature-ha | X | X | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l3-nofeature-noha| | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-onos-sfc-ha | X | X | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-onos-sfc-noha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-onos-nofeature-ha | X | X | X | X | -+-------------------------+---------+---------+---------+---------+ -| os-onos-nofeature-noha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-sfc-ha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-sfc-noha | X | X | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-bgpvpn-ha | X | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-bgpvpn-noha | | X | X | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-kvm-ha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-kvm-noha | | X | X | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-ovs-ha | | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-ovs-noha | X | | X | | -+-------------------------+---------+---------+---------+---------+ -| os-ocl-nofeature-ha | | | | | -+-------------------------+---------+---------+---------+---------+ -| os-nosdn-lxd-ha | | | | X | 
-+-------------------------+---------+---------+---------+---------+ -| os-nosdn-lxd-noha | | | | X | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-fdio-noha | X | | | | -+-------------------------+---------+---------+---------+---------+ -| os-odl_l2-moon-ha | | X | | | -+-------------------------+---------+---------+---------+---------+ - -To qualify for release, the scenarios must have deployed and been successfully -tested in four consecutive installations to establish stability of deployment -and feature capability. It is a recommendation to run Yardstick test -cases over a longer period of time in order to better understand the behavior -of the system under test. - -References ----------- - -* IEEE Std 829-2008. "Standard for Software and System Test Documentation". - -* OPNFV Colorado release note for Yardstick. diff --git a/docs/results/results.rst b/docs/results/results.rst deleted file mode 100644 index 04c6b9f87..000000000 --- a/docs/results/results.rst +++ /dev/null @@ -1,57 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - -Results listed by scenario -========================== - -The following sections describe the yardstick results as evaluated for the -Colorado release scenario validation runs. Each section describes the -determined state of the specific scenario as deployed in the Colorado -release process. - -Scenario Results -================ - -.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/ - -The following documents contain results of Yardstick test cases executed on -OPNFV labs, triggered by OPNFV CI pipeline, documented per scenario. - - -.. toctree:: - :maxdepth: 1 - - os-nosdn-nofeature-ha.rst - os-nosdn-nofeature-noha.rst - os-odl_l2-nofeature-ha.rst - os-odl_l2-bgpvpn-ha.rst - os-odl_l2-sfc-ha.rst - os-nosdn-kvm-ha.rst - os-onos-nofeature-ha.rst - os-onos-sfc-ha.rst - -Test results of executed tests are avilable in Dashboard_ and logs in Jenkins_. - - -Feature Test Results -==================== - -The following features were verified by Yardstick test cases: - - * IPv6 - - * HA (see :doc:`yardstick-opnfv-ha`) - - * KVM - - * Parser - - * Virtual Traffic Classifier (see :doc:`yardstick-opnfv-vtc`) - - * StorPerf - -.. note:: The test cases for IPv6 and Parser Projects are included in the - compass scenario. - diff --git a/docs/results/yardstick-opnfv-ha.rst b/docs/results/yardstick-opnfv-ha.rst deleted file mode 100644 index ef1617342..000000000 --- a/docs/results/yardstick-opnfv-ha.rst +++ /dev/null @@ -1,118 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -=================================== -Test Results for yardstick-opnfv-ha -=================================== - -.. toctree:: - :maxdepth: 2 - -Details -======= -There are two test cases, TC019 and TC025, for high availability (HA) test of -OPNFV platform, and both test cases were executed in CMCC's lab with 3+2 HA -deployment, where the installer is Arno SR1 release of fuel. - - -TC019 ------ -This test case verifies the high availability of the openstack service, i.e. -"nova-api", on controller node. 
-There is one attacker, "kill-process", which kills all "nova-api" processes, -and two monitors: "openstack-cmd" monitors the "nova-api" service via the openstack -command "nova image-list", while the "process" monitor checks whether the "nova-api" -process is running. Please see the test case description document for details. - -Overview of test results ------------------------- -The service_outage_time of "nova image-list" is 0 seconds, while the -process_recover_time of "nova-api" is 300 seconds, which equals the running time -of this test case; that means the "nova-api" service can't automatically -recover itself. - -Detailed test results --------------------- -All "nova-api" processes on the selected controller node were killed, and the results -of the two monitors were collected. Specifically, the results of the "nova image-list" -requests were collected from the compute node and the status of the "nova-api" process -was collected from the selected controller node. - -Each monitor was running in a single process. The running time of each monitor -was about 300 seconds, with no waiting time between two consecutive monitor runs. For -"nova image-list", the number of runs is 127, that is to say there is one -openstack command request every 2.36 seconds; while the number of runs is 141 -for the "nova-api" process checking, i.e. one check about every 2.13 seconds. - -The outage time of each monitor, named "service_outage_time" for the -"openstack-cmd" monitor and "process_recover_time" for the "process" monitor, is -defined as the duration from the begin time of the first failed request to the -end time of the last failed request. - -All "nova image-list" requests were successful, so the service_outage_time of -"nova image-list" is 0 seconds, while the "nova-api" processes were not running for -any "process" check, so the process_recover_time of "nova-api" is 300s. - -Rationale for decisions ------------------------ -The service_outage_time is 0 seconds, which means the failover time of the openstack -service is less than 2.36s, the period of each request. However, the -process_recover_time equals the test case running time, which means the process is -not automatically recovered, so this test case fails. - - -TC025 ------ -This test case verifies the high availability of a controller node. When one of -the controller nodes is abnormally shut down, the provided services should stay OK. -There is one attacker, "kill-process", which kills all "nova-api" processes, -and two "openstack-cmd" monitors, one monitoring the openstack command -"nova image-list" and the other monitoring "neutron router-list". -Please see the test case description document for details. - -Overview of test results ------------------------- -The service_outage_time of both "nova image-list" and "neutron router-list" -was 0 seconds. - -Detailed test results --------------------- -A selected controller node was shut down, and the results of the two monitors were -collected from the compute node. - -The return results of the "nova image-list" and "neutron router-list" requests from -the compute node were collected, and the times of the failed requests were used to compute the -service_outage_time of the corresponding service. - -Each monitor was running in a single process. The running time of each monitor -was about 300 seconds, with no waiting time between two consecutive monitor runs. For -"nova image-list", the number of runs is 49, that is to say there is one -openstack command request every 6.12 seconds; while the number of runs is 28 for -"neutron router-list", i.e. one check about every 10.71 seconds.
- -The "service_outage_time" for the two monitors is defined as the duration from the -begin time of the first failed request to the end time of the last failed -request. - -All "nova image-list" and "neutron router-list" requests were successful, so the -service_outage_time of both monitors was 0 seconds. - -Rationale for decisions ------------------------ -As the service_outage_time of all monitors is 0 seconds, meaning there was no -failed request during the test case running time, this test case is passed. - - -Conclusions and recommendations -------------------------------- -TC019 shows that the killed process will not be automatically recovered, which -should be improved. - -There are several improvement points for the HA tests: -a) Running test cases in different environments deployed by different installers, -such as compass4nfv, apex and joid, with different versions. -b) The period of each request is a little long; a more accurate test -method is needed. -c) More test cases with different faults and different monitors are needed. diff --git a/docs/results/yardstick-opnfv-kvm.rst b/docs/results/yardstick-opnfv-kvm.rst deleted file mode 100644 index ee4c6390b..000000000 --- a/docs/results/yardstick-opnfv-kvm.rst +++ /dev/null @@ -1,38 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -==================================== -Test Results for yardstick-opnfv-kvm -==================================== - -.. toctree:: - :maxdepth: 2 - - -Details ======= - -.. after this doc is filled, remove all comments and include the scenario in -.. results.rst by removing the comment on the file name. - - -Overview of test results ------------------------- - -.. general on metrics collected, number of iterations - -Detailed test results --------------------- - -.. info on lab, installer, scenario - -Rationale for decisions ------------------------ -.. result analysis, pass/fail - -Conclusions and recommendations -------------------------------- - -.. did the expected behavior occur? diff --git a/docs/results/yardstick-opnfv-parser.rst b/docs/results/yardstick-opnfv-parser.rst deleted file mode 100644 index 520d867ef..000000000 --- a/docs/results/yardstick-opnfv-parser.rst +++ /dev/null @@ -1,38 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -======================================= -Test Results for yardstick-opnfv-parser -======================================= - -.. toctree:: - :maxdepth: 2 - - -Details ======= - -.. after this doc is filled, remove all comments and include the scenario in -.. results.rst by removing the comment on the file name. - - -Overview of test results ------------------------- - -.. general on metrics collected, number of iterations - -Detailed test results --------------------- - -.. info on lab, installer, scenario - -Rationale for decisions ------------------------ -.. result analysis, pass/fail - -Conclusions and recommendations -------------------------------- - -.. did the expected behavior occur? diff --git a/docs/results/yardstick-opnfv-vtc.rst b/docs/results/yardstick-opnfv-vtc.rst deleted file mode 100644 index 059b5491f..000000000 --- a/docs/results/yardstick-opnfv-vtc.rst +++ /dev/null @@ -1,248 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - -..
_Dashboard006: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc006 -.. _Dashboard007: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc007 -.. _Dashboard020: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc020 -.. _Dashboard021: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc021 -.. _DashboardVTC: http://testresults.opnfv.org/grafana/dashboard/db/vtc-dashboard -==================================== -Test Results for yardstick-opnfv-vtc -==================================== - -.. toctree:: - :maxdepth: 2 - - -Details -======= - -.. after this doc is filled, remove all comments and include the scenario in -.. results.rst by removing the comment on the file name. - - -Overview of test results ------------------------- - -.. general on metrics collected, number of iterations - -The virtual Traffic Classifier (vtc) Scenario supported by Yardstick is used by 4 Test Cases: - -- TC006 -- TC007 -- TC020 -- TC021 - - -* TC006 - -TC006 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking Test. -It collects measures about the end-to-end throughput supported by the -virtual Traffic Classifier (vTC). -Results of the test are shown in the Dashboard006_ -The throughput is expressed as percentage of the available bandwidth on the NIC. - - -* TC007 - -TC007 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking in presence of -noisy neighbors Test. -It collects measures about the end-to-end throughput supported by the -virtual Traffic Classifier when a user-defined number of noisy neighbors is deployed. -Results of the test are shown in the Dashboard007_ -The throughput is expressed as percentage of the available bandwidth on the NIC. - - -* TC020 - -TC020 is the Virtual Traffic Classifier Instantiation Test. -It verifies that a newly instantiated vTC is alive and functional and its instantiation -is correctly supported by the underlying infrastructure. -Results of the test are shown in the Dashboard020_ - - -* TC021 - -TC021 is the Virtual Traffic Classifier Instantiation in presence of noisy neighbors Test. -It verifies that a newly instantiated vTC is alive and functional and its instantiation -is correctly supported by the underlying infrastructure when noisy neighbors are present. -Results of the test are shown in the Dashboard021_ - -* Generic - -In the Generic scenario the Virtual Traffic Classifier is running on a standard Openstack -setup and traffic is being replayed from a neighbor VM. The traffic sent contains -various protocols and applications, and the VTC identifies them and exports the data. -Results of the test are shown in the DashboardVTC. 
- -Detailed test results ---------------------- - -* TC006 - -The results for TC006 have been obtained using the following test case -configuration: - -- Context: Dummy -- Scenario: vtc_throughput -- Network Technology: SR-IOV -- vTC Flavor: m1.large - - -* TC007 - -The results for TC007 have been obtained using the following test case -configuration: - -- Context: Dummy -- Scenario: vtc_throughput_noisy -- Network Technology: SR-IOV -- vTC Flavor: m1.large -- Number of noisy neighbors: 2 -- Number of cores per neighbor: 2 -- Amount of RAM per neighbor: 1G - - -* TC020 - -The results for TC020 have been obtained using the following test case -configuration: - -- Context: Dummy -- Scenario: vtc_instantiation_validation -- Network Technology: SR-IOV -- vTC Flavor: m1.large - - -* TC021 - -The results listed in the previous section have been obtained using the following -test case configuration: - -- Context: Dummy -- Scenario: vtc_instantiation_validation -- Network Technology: SR-IOV -- vTC Flavor: m1.large -- Number of noisy neighbors: 2 -- Number of cores per neighbor: 2 -- Amount of RAM per neighbor: 1G - - -For all the test cases, the user can specify different values for the parameters. - -* Generic - -The results listed in the previous section have been obtained using a -standard Openstack setup. -The user can replay his/her own traffic and see the corresponding results. - -Rationale for decisions ------------------------ - -* TC006 - -The result of the test is a number between 0 and 100 which represents the percentage of bandwidth -available on the NIC that corresponds to the throughput supported by the vTC. - - -* TC007 - -The result of the test is a number between 0 and 100 which represents the percentage of bandwidth -available on the NIC that corresponds to the throughput supported by the vTC. - -* TC020 - -The execution of the test is done as described in the following: - -- The vTC is deployed on the OpenStack testbed; -- Some traffic is sent to the vTC; -- The vTC changes the header of the packets and sends them back to the packet generator; -- The packet generator checks that all the packets are received correctly and have been changed -correctly by the vTC. - -The test is declared as PASSED if all the packets are correctly received by the packet generator -and they have been modified by the virtual Traffic Classifier as required. - - -* TC021 - -The execution of the test is done as described in the following: - -- The vTC is deployed on the OpenStack testbed; -- The noisy neighbors are deployed as requested by the user; -- Some traffic is sent to the vTC; -- The vTC changes the header of the packets and sends them back to the packet generator; -- The packet generator checks that all the packets are received correctly and have been changed -correctly by the vTC. - -The test is declared as PASSED if all the packets are correctly received by the packet generator -and they have been modified by the virtual Traffic Classifier as required.
- -* Generic - -The execution of the test consists of the following actions: - -- The vTC is deployed on the OpenStack testbed; -- The traffic generator VM is deployed on the Openstack Testbed; -- Traffic data are relevant to the network setup; -- Traffic is sent to the vTC; - - - -Conclusions and recommendations -------------------------------- - -* TC006 - -The obtained results show that the virtual Traffic Classifier can support up to 4 Gbps -(40% of the available bandwidth), which corresponds to the expected behaviour of the virtual -Traffic Classifier. -Using the configuration with SR-IOV and a large flavor, the expected throughput should -generally be in the range between 3 and 4 Gbps. - - -* TC007 - -These results correspond to the configuration in which the virtual Traffic Classifier uses SR-IOV -Virtual Functions and the flavor is set to large for the virtual machine. -The throughput is in the range between 2.5 Gbps and 3.7 Gbps. -This shows that the presence of 2 noisy neighbors reduces the throughput of -the service by between 10 and 20%. -Increasing the number of neighbours would have a higher impact on the performance. - - -* TC020 - -The obtained results correspond to the expected behaviour of the virtual Traffic Classifier. -Using the configuration with SR-IOV and a large flavor, the expected result is that the vTC is -correctly instantiated, it is able to receive and send packets using SR-IOV technology -and to forward packets back to the packet generator, changing the TCP/IP header as required. - - -* TC021 - -The obtained results correspond to the expected behaviour of the virtual Traffic Classifier. -Using the configuration with SR-IOV and a large flavor, the expected result is that the vTC is -correctly instantiated, it is able to receive and send packets using SR-IOV technology -and to forward packets back to the packet generator, changing the TCP/IP header as required, -also in the presence of noisy neighbors. - -* Generic - -The obtained results correspond to the expected behaviour of the virtual Traffic Classifier. -Using the aforementioned configuration, the expected application protocols are identified -and their traffic statistics are demonstrated in the DashboardVTC; a group of popular -applications is selected to demonstrate the sound operation of the vTC. -The demonstrated application protocols are: -- HTTP -- Skype -- Bittorrent -- Youtube -- Dropbox -- Twitter -- Viber -- iCloud diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst new file mode 100755 index 000000000..0e0eea002 --- /dev/null +++ b/docs/testing/user/userguide/01-introduction.rst @@ -0,0 +1,79 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB and others. + +============ +Introduction +============ + +**Welcome to Yardstick's documentation!** + +.. _Pharos: https://wiki.opnfv.org/pharos +.. _Yardstick: https://wiki.opnfv.org/yardstick +.. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2 +Yardstick_ is an OPNFV Project. + +The project's goal is to verify infrastructure compliance from the perspective +of a Virtual Network Function (:term:`VNF`). + +The Project's scope is the development of a test framework, *Yardstick*, test +cases and test stimuli to enable Network Function Virtualization Infrastructure +(:term:`NFVI`) verification.
+The Project also includes a sample :term:`VNF`, the Virtual Traffic Classifier +(:term:`VTC`), and its experimental framework, *ApexLake*. + +*Yardstick* is used in OPNFV for verifying the OPNFV infrastructure and some of +the OPNFV features. The *Yardstick* framework is deployed in several OPNFV +community labs. It is *installer*, *infrastructure* and *application* +independent. + +.. seealso:: Pharos_ for information on OPNFV community labs and this + Presentation_ for an overview of *Yardstick* + + +About This Document +=================== + +This document consists of the following chapters: + +* Chapter :doc:`02-methodology` describes the methodology implemented by the + Yardstick Project for :term:`NFVI` verification. + +* Chapter :doc:`03-architecture` provides information on the software architecture + of *Yardstick*. + +* Chapter :doc:`04-vtc-overview` provides information on the :term:`VTC`. + +* Chapter :doc:`05-apexlake_installation` provides instructions to install the + experimental framework *ApexLake*. + +* Chapter :doc:`06-apexlake_api` explains how this framework is integrated in + *Yardstick*. + +* Chapter :doc:`07-nsb-overview` describes the methodology implemented by + Yardstick Network Services Benchmarking to test real-world use cases for a + given VNF. + +* Chapter :doc:`08-nsb_installation` provides instructions to install + *Yardstick Network Services Benchmarking*. + +* Chapter :doc:`09-installation` provides instructions to install *Yardstick*. + +* Chapter :doc:`10-yardstick_plugin` provides information on how to integrate + other OPNFV testing projects into *Yardstick*. + +* Chapter :doc:`11-result-store-InfluxDB` provides information on how to run + plug-in test cases and store test results in the community's InfluxDB. + +* Chapter :doc:`12-grafana` provides information on how to display test results + with Grafana. + +* Chapter :doc:`13-list-of-tcs` includes a list of available Yardstick test + cases. + + +Contact Yardstick +================= + +Feedback? `Contact us`_ + +.. _Contact us: opnfv-users@lists.opnfv.org + diff --git a/docs/testing/user/userguide/02-methodology.rst b/docs/testing/user/userguide/02-methodology.rst new file mode 100644 index 000000000..34d271095 --- /dev/null +++ b/docs/testing/user/userguide/02-methodology.rst @@ -0,0 +1,195 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB and others. + +=========== +Methodology +=========== + +Abstract +======== + +This chapter describes the methodology implemented by the Yardstick project for +verifying the :term:`NFVI` from the perspective of a :term:`VNF`. + +ETSI-NFV +======== + +.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf +.. _Yardsticktst: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_bridging_opnfv_and_etsi.pdf?version=1&modificationDate=1458848320000&api=v2 + + +The document ETSI GS NFV-TST001_, "Pre-deployment Testing; Report on Validation +of NFV Environments and Services", recommends methods for pre-deployment +testing of the functional components of an NFV environment. + +The Yardstick project implements the methodology described in chapter 6, "Pre- +deployment validation of NFV infrastructure". + +The methodology consists of decomposing the typical :term:`VNF` work-load +performance metrics into a number of characteristics/performance vectors, each +of which can be represented by distinct test-cases.
+ +The methodology includes five steps: + +* *Step1:* Define Infrastructure - the Hardware, Software and corresponding + configuration target for validation; the OPNFV infrastructure, in OPNFV + community labs. + +* *Step2:* Identify :term:`VNF` type - the application for which the + infrastructure is to be validated, and its requirements on the underlying + infrastructure. + +* *Step3:* Select test cases - depending on the workload that represents the + application for which the infrastructure is to be validated, select the + relevant test cases amongst the list of available Yardstick test cases. + +* *Step4:* Execute tests - define the duration and number of iterations for the + selected test cases; test runs are automated via OPNFV Jenkins Jobs. + +* *Step5:* Collect results - using the common API for result collection. + +.. seealso:: Yardsticktst_ for material on the alignment of ETSI TST001 and Yardstick. + +Metrics +======= + +The metrics, as defined by ETSI GS NFV-TST001, are shown in +:ref:`Table1 <table2_1>`, :ref:`Table2 <table2_2>` and +:ref:`Table3 <table2_3>`. + +In the OPNFV Colorado release, generic test cases covering aspects of the listed +metrics are available; further OPNFV releases will provide extended testing of +these metrics. +The mapping of available Yardstick test cases onto the ETSI definitions in +:ref:`Table1 <table2_1>`, :ref:`Table2 <table2_2>` and :ref:`Table3 <table2_3>` +is shown in :ref:`Table4 <table2_4>`. +Note that the Yardstick test cases are examples; the test +duration and number of iterations are configurable, as are the System Under +Test (SUT) and the attributes (or, in Yardstick nomenclature, the scenario +options). + +.. _table2_1: + +**Table 1 - Performance/Speed Metrics** + ++---------+-------------------------------------------------------------------+ +| Category| Performance/Speed | +| | | ++---------+-------------------------------------------------------------------+ +| Compute | * Latency for random memory access | +| | * Latency for cache read/write operations | +| | * Processing speed (instructions per second) | +| | * Throughput for random memory access (bytes per second) | +| | | ++---------+-------------------------------------------------------------------+ +| Network | * Throughput per NFVI node (frames/byte per second) | +| | * Throughput provided to a VM (frames/byte per second) | +| | * Latency per traffic flow | +| | * Latency between VMs | +| | * Latency between NFVI nodes | +| | * Packet delay variation (jitter) between VMs | +| | * Packet delay variation (jitter) between NFVI nodes | +| | | ++---------+-------------------------------------------------------------------+ +| Storage | * Sequential read/write IOPS | +| | * Random read/write IOPS | +| | * Latency for storage read/write operations | +| | * Throughput for storage read/write operations | +| | | ++---------+-------------------------------------------------------------------+ + ..
_table2_2: + +**Table 2 - Capacity/Scale Metrics** + ++---------+-------------------------------------------------------------------+ +| Category| Capacity/Scale | +| | | ++---------+-------------------------------------------------------------------+ +| Compute | * Number of cores and threads | +| | * Available memory size | +| | * Cache size | +| | * Processor utilization (max, average, standard deviation) | +| | * Memory utilization (max, average, standard deviation) | +| | * Cache utilization (max, average, standard deviation) | +| | | ++---------+-------------------------------------------------------------------+ +| Network | * Number of connections | +| | * Number of frames sent/received | +| | * Maximum throughput between VMs (frames/byte per second) | +| | * Maximum throughput between NFVI nodes (frames/byte per second) | +| | * Network utilization (max, average, standard deviation) | +| | * Number of traffic flows | +| | | ++---------+-------------------------------------------------------------------+ +| Storage | * Storage/Disk size | +| | * Capacity allocation (block-based, object-based) | +| | * Block size | +| | * Maximum sequential read/write IOPS | +| | * Maximum random read/write IOPS | +| | * Disk utilization (max, average, standard deviation) | +| | | ++---------+-------------------------------------------------------------------+ + +.. _table2_3: + +**Table 3 - Availability/Reliability Metrics** + ++---------+-------------------------------------------------------------------+ +| Category| Availability/Reliability | +| | | ++---------+-------------------------------------------------------------------+ +| Compute | * Processor availability (Error free processing time) | +| | * Memory availability (Error free memory time) | +| | * Processor mean-time-to-failure | +| | * Memory mean-time-to-failure | +| | * Number of processing faults per second | +| | | ++---------+-------------------------------------------------------------------+ +| Network | * NIC availability (Error free connection time) | +| | * Link availability (Error free transmission time) | +| | * NIC mean-time-to-failure | +| | * Network timeout duration due to link failure | +| | * Frame loss rate | +| | | ++---------+-------------------------------------------------------------------+ +| Storage | * Disk availability (Error free disk access time) | +| | * Disk mean-time-to-failure | +| | * Number of failed storage read/write operations per second | +| | | ++---------+-------------------------------------------------------------------+ + +.. _table2_4: + +**Table 4 - Yardstick Generic Test Cases** + ++---------+-------------------+----------------+------------------------------+ +| Category| Performance/Speed | Capacity/Scale | Availability/Reliability | +| | | | | ++---------+-------------------+----------------+------------------------------+ +| Compute | TC003 [1]_ | TC003 [1]_ | TC013 [1]_ | +| | TC004 | TC004 | TC015 [1]_ | +| | TC010 | TC024 | | +| | TC012 | TC055 | | +| | TC014 | | | +| | TC069 | | | ++---------+-------------------+----------------+------------------------------+ +| Network | TC001 | TC044 | TC016 [1]_ | +| | TC002 | TC073 | TC018 [1]_ | +| | TC009 | TC075 | | +| | TC011 | | | +| | TC042 | | | +| | TC043 | | | ++---------+-------------------+----------------+------------------------------+ +| Storage | TC005 | TC063 | TC017 [1]_ | ++---------+-------------------+----------------+------------------------------+ + ..
note:: The description in this OPNFV document is intended as a reference for + users to understand the scope of the Yardstick Project and the + deliverables of the Yardstick framework. For a complete description of + the methodology, please refer to the ETSI document. + +.. rubric:: Footnotes +.. [1] To be included in future deliveries. + diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst new file mode 100755 index 000000000..03bf00f58 --- /dev/null +++ b/docs/testing/user/userguide/03-architecture.rst @@ -0,0 +1,266 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) 2016 Huawei Technologies Co.,Ltd and others + +============ +Architecture +============ + +Abstract +======== +This chapter describes the Yardstick framework software architecture. We +introduce it from the Use-Case View, Logical View, Process View and Deployment +View, together with the relevant technical details. + +Overview +======== + +Architecture overview +--------------------- +Yardstick is mainly written in Python, and test configurations are made +in YAML. Documentation is written in reStructuredText format, i.e. .rst +files. Yardstick is inspired by Rally. Yardstick is intended to run on a +computer with access and credentials to a cloud. The test case is described +in a configuration file given as an argument. + +How it works: the benchmark task configuration file is parsed and converted into +an internal model. The context part of the model is converted into a Heat +template and deployed into a stack. Each scenario is run using a runner, either +serially or in parallel. Each runner runs in its own subprocess executing +commands in a VM using SSH. The output of each scenario is written as json +records to a file, influxdb or an http server; influxdb is used as the backend, +and the test results are shown with grafana. + + +Concept +------- +**Benchmark** - assess the relative performance of something + +**Benchmark configuration file** - describes a single test case in yaml format + +**Context** - The set of Cloud resources used by a scenario, such as user +names, image names, affinity rules and network configurations. A context is +converted into a simplified Heat template, which is used to deploy onto the +Openstack environment. + +**Data** - Output produced by running a benchmark, written to a file in json format + +**Runner** - Logic that determines how a test scenario is run and reported, for +example the number of test iterations, input value stepping and test duration. +Predefined runner types exist for re-use, see `Runner types`_. + +**Scenario** - Type/class of measurement, for example Ping, Pktgen, (Iperf, LmBench, ...) + +**SLA** - Relates to what result boundary a test case must meet to pass. For +example a latency limit, amount or ratio of lost packets and so on. Action +based on :term:`SLA` can be configured, either just to log (monitor) or to stop +further testing (assert). The :term:`SLA` criteria are set in the benchmark +configuration file and evaluated by the runner. + + +Runner types +------------ + +Several predefined runner types exist to choose between when designing +a test scenario: + +**Arithmetic:** +Every test run arithmetically steps the specified input value(s) in the +test scenario, adding a value to the previous input value.
+It is also possible to combine several input values for the same test case
+in different combinations.
+
+Snippet of an Arithmetic runner configuration:
+::
+
+  runner:
+    type: Arithmetic
+    iterators:
+    -
+      name: stride
+      start: 64
+      stop: 128
+      step: 64
+
+**Duration:**
+The test runs for a specific period of time before it is completed.
+
+Snippet of a Duration runner configuration:
+::
+
+  runner:
+    type: Duration
+    duration: 30
+
+**Sequence:**
+The test changes a specified input value in the scenario between runs. The
+input values for the sequence are specified in a list in the benchmark
+configuration file.
+
+Snippet of a Sequence runner configuration:
+::
+
+  runner:
+    type: Sequence
+    scenario_option_name: packetsize
+    sequence:
+    - 100
+    - 200
+    - 250
+
+**Iteration:**
+Tests are run a specified number of times before being completed.
+
+Snippet of an Iteration runner configuration:
+::
+
+  runner:
+    type: Iteration
+    iterations: 2
+
+
+Use-Case View
+=============
+The Yardstick Use-Case View shows two kinds of users. One is the Tester, who
+runs tests in the cloud; the other is the User, who is more concerned with
+the test results and their analysis.
+
+Testers run a single test case or a test case suite to verify infrastructure
+compliance or benchmark their own infrastructure performance. Test results
+are stored by the dispatcher module; three storage methods (file, influxdb
+and http) can be configured. Detailed information about scenarios and runners
+can be queried by testers via the CLI.
+
+Users can check test results in one of four ways.
+
+If the dispatcher module is configured as file (the default), there are two
+ways to check test results. One is to read the results from yardstick.out
+(default path: /tmp/yardstick.out); the other is to get a plot of the test
+results, which is shown when users execute the command "yardstick-plot".
+
+If the dispatcher module is configured as influxdb, users can check test
+results on Grafana, which is commonly used for visualizing time series data.
+
+If the dispatcher module is configured as http, users can check test results
+on the OPNFV testing dashboard, which uses MongoDB as its backend.
+
+.. image:: images/Use_case.png
+   :width: 800px
+   :alt: Yardstick Use-Case View
+
+Logical View
+============
+The Yardstick Logical View describes the most important classes, their
+organization, and the most important use-case realizations.
+
+Main classes:
+
+**TaskCommands** - "yardstick task" subcommand handler.
+
+**HeatContext** - Converts the context section of the test YAML file into a
+HOT (Heat Orchestration Template) and deploys/undeploys the OpenStack Heat
+stack.
+
+**Runner** - Logic that determines how a test scenario is run and reported.
+
+**TestScenario** - Type/class of measurement, for example Ping, Pktgen,
+Iperf, LmBench, ...
+
+**Dispatcher** - Chooses the user-defined way to store test results.
+
+TaskCommands is the main entry of the "yardstick task" subcommand. It takes a
+YAML file (e.g. test.yaml) as input, and uses HeatContext to convert the YAML
+file's context section to HOT. After the OpenStack Heat stack is deployed by
+HeatContext with the converted HOT, TaskCommands uses a Runner to run the
+specified TestScenario. During the first runner initialization, an output
+process is created; the output process uses the Dispatcher to push test
+results. The Runner also creates a process to execute the TestScenario, and
+there is a multiprocessing queue between each runner process and the output
+process, so the runner processes can push real-time test results to the
+storage backend.
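+The following is a minimal, hypothetical sketch of this queue-based result
+flow; the function names are illustrative only and do not correspond to the
+actual Yardstick internals:
+
+.. code-block:: python
+
+   import multiprocessing
+   import json
+
+   def runner_process(queue):
+       # Execute the scenario and push each result record onto the queue.
+       for iteration in range(3):
+           record = {"sequence": iteration, "data": {"rtt": 1.125}}
+           queue.put(record)
+       queue.put(None)  # sentinel: no more results
+
+   def output_process(queue):
+       # Pop records off the queue and dispatch them (here: print as JSON).
+       while True:
+           record = queue.get()
+           if record is None:
+               break
+           print(json.dumps(record))
+
+   if __name__ == "__main__":
+       q = multiprocessing.Queue()
+       out = multiprocessing.Process(target=output_process, args=(q,))
+       run = multiprocessing.Process(target=runner_process, args=(q,))
+       out.start()
+       run.start()
+       run.join()
+       out.join()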
+TestScenario is typically connected to the VMs using SSH; it sets up the VMs
+and runs the test measurement scripts through the SSH tunnel. After all
+TestScenarios are finished, TaskCommands undeploys the Heat stack, and the
+whole test is finished.
+
+.. image:: images/Logical_view.png
+   :width: 800px
+   :alt: Yardstick Logical View
+
+Process View (Test execution flow)
+==================================
+The Yardstick Process View shows how Yardstick runs a test case. Below is the
+sequence diagram of the test execution flow using the Heat context; each
+object represents one module in Yardstick:
+
+.. image:: images/test_execution_flow.png
+   :width: 800px
+   :alt: Yardstick Process View
+
+A user who wants to run a test with Yardstick can use the CLI to input the
+command to start a task. "TaskCommands" will receive the command and ask
+"HeatContext" to parse the context. "HeatContext" will then ask "Model" to
+convert the model. After the model is generated, "HeatContext" will inform
+"Openstack" to deploy the Heat stack by Heat template. After "Openstack"
+deploys the stack, "HeatContext" will inform "Runner" to run the specific
+test case.
+
+Firstly, "Runner" asks "TestScenario" to process the specific scenario. Then
+"TestScenario" logs on to the OpenStack VMs via the SSH protocol and executes
+the test case on the specified VMs. After the script execution finishes,
+"TestScenario" sends a message to inform "Runner". When the testing job is
+done, "Runner" informs "Dispatcher" to output the test result via file,
+influxdb or http. After the result is output, "HeatContext" calls
+"Openstack" to undeploy the Heat stack. Once the stack is undeployed, the
+whole test ends.
+
+Deployment View
+===============
+The Yardstick Deployment View shows how the Yardstick tool can be deployed on
+the underlying platform. Generally, Yardstick is installed on the JumpServer
+(see :doc:`09-installation` for detailed installation steps), and the
+JumpServer is connected to the other control/compute servers by networking.
+Based on this deployment, Yardstick can run the test cases on these hosts and
+collect the test results for presentation.
+
+.. image:: images/Deployment.png
+   :width: 800px
+   :alt: Yardstick Deployment View
+
+Yardstick Directory structure
+=============================
+
+**yardstick/** - Yardstick main directory.
+
+*ci/* - Used for continuous integration of Yardstick at different PODs and
+  with support for different installers.
+
+*docs/* - All documentation is stored here, such as configuration guides,
+  user guides and Yardstick descriptions.
+
+*etc/* - Used for test cases requiring specific POD configurations.
+
+*samples/* - Test case samples are stored here; most scenario and feature
+  samples can be found in this directory.
+
+*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as
+  well as the test cases run to verify the NFVI (*opnfv/*) are stored. Also
+  configurations of what to run daily and weekly at the different PODs are
+  located here.
+
+*tools/* - Contains tools to build the guest image for VMs deployed by Heat,
+  in particular the scripts needed to build the yardstick-trusty-server
+  image with the different tools that are required within the image.
+
+*plugin/* - Plug-in configuration files are stored here.
+
+*vTC/* - Contains the files for running the virtual Traffic Classifier tests.
+
+*yardstick/* - Contains the internals of Yardstick: Runners, Scenarios,
+  Contexts, CLI parsing, keys, plotting tools, dispatcher, plugin
+  install/remove scripts and so on.
+
diff --git a/docs/testing/user/userguide/04-vtc-overview.rst b/docs/testing/user/userguide/04-vtc-overview.rst
new file mode 100644
index 000000000..82b20cad5
--- /dev/null
+++ b/docs/testing/user/userguide/04-vtc-overview.rst
@@ -0,0 +1,122 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+==========================
+Virtual Traffic Classifier
+==========================
+
+Abstract
+========
+
+.. _TNOVA: http://www.t-nova.eu/
+.. _TNOVAresults: http://www.t-nova.eu/results/
+.. _Yardstick: https://wiki.opnfv.org/yardstick
+
+This chapter provides an overview of the virtual Traffic Classifier, a
+contribution to OPNFV Yardstick_ from the EU Project TNOVA_.
+Additional documentation is available in TNOVAresults_.
+
+Overview
+========
+
+The virtual Traffic Classifier (:term:`VTC`) :term:`VNF` comprises a single
+Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains
+both the Traffic Inspection module and the Traffic Forwarding module needed
+to run the :term:`VNF`. The exploitation of Deep Packet Inspection
+(:term:`DPI`) methods for traffic classification is built around two basic
+assumptions:
+
+* third parties unaffiliated with either source or recipient are able to
+  inspect each IP packet's payload
+
+* the classifier knows the relevant syntax of each application's packet
+  payloads (protocol signatures, data patterns, etc.).
+
+The proposed :term:`DPI` based approach only uses an indicative, small
+number of the initial packets from each flow to identify the content, rather
+than inspecting every packet.
+
+In this respect it follows the Packet Based per Flow State (:term:`PBFS`)
+approach. This method uses a table to track each session, based on the
+5-tuple (source address, destination address, source port, destination port,
+transport protocol) that is maintained for each flow; a simplified
+flow-table sketch is given below, at the end of the Architecture section.
+
+Concepts
+========
+
+* *Traffic Inspection*: The process of packet analysis and application
+  identification of network traffic that passes through the :term:`VTC`.
+
+* *Traffic Forwarding*: The process of packet forwarding from an incoming
+  network interface to a pre-defined outgoing network interface.
+
+* *Traffic Rule Application*: The process of packet tagging, based on a
+  predefined set of rules. Packet tagging may include e.g. Type of Service
+  (:term:`ToS`) field modification.
+
+Architecture
+============
+
+The Traffic Inspection module is the most computationally intensive component
+of the :term:`VNF`. It implements filtering and packet matching algorithms in
+order to support the enhanced traffic forwarding capability of the
+:term:`VNF`. The component supports a flow table (exploiting hashing
+algorithms for fast indexing of flows) and an inspection engine for traffic
+classification.
+
+The implementation used for these experiments exploits the nDPI library.
+The packet capturing mechanism is implemented using libpcap. When the
+:term:`DPI` engine identifies a new flow, the flow register is updated with
+the appropriate information and passed to the Traffic Forwarding module,
+which then applies any required policy updates.
+
+The Traffic Forwarding module is responsible for routing and packet
+forwarding.
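+Before turning to the forwarding details, here is a minimal, hypothetical
+sketch of the PBFS-style flow table and early-packet classification
+described above; the names and the toy signature match are illustrative
+only, not the actual nDPI-based implementation:
+
+.. code-block:: python
+
+   from collections import namedtuple
+
+   FiveTuple = namedtuple(
+       "FiveTuple", "src_addr dst_addr src_port dst_port proto")
+
+   class FlowTable(object):
+       """Track flows by 5-tuple and classify them after a few packets."""
+
+       def __init__(self, packets_to_inspect=5):
+           self.flows = {}  # FiveTuple -> {"packets": n, "app": str or None}
+           self.packets_to_inspect = packets_to_inspect
+
+       def on_packet(self, five_tuple, payload):
+           flow = self.flows.setdefault(
+               five_tuple, {"packets": 0, "app": None})
+           flow["packets"] += 1
+           # Only the first few packets of a flow are inspected.
+           if flow["app"] is None and flow["packets"] <= self.packets_to_inspect:
+               flow["app"] = self.classify(payload)
+           return flow["app"]  # None -> forward with the default policy
+
+       @staticmethod
+       def classify(payload):
+           # Toy signature match standing in for a real DPI engine.
+           if payload.startswith(b"GET ") or payload.startswith(b"POST"):
+               return "http"
+           return None
+
+   table = FlowTable()
+   ft = FiveTuple("10.0.0.1", "10.0.0.2", 34567, 80, "tcp")
+   print(table.on_packet(ft, b"GET /index.html HTTP/1.1"))  # -> "http"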
+The Traffic Forwarding module accepts incoming network traffic, consults the
+flow table for classification information for each incoming flow and then
+applies pre-defined policies, marking e.g. :term:`ToS`/Differentiated
+Services Code Point (:term:`DSCP`) multimedia traffic for Quality of Service
+(:term:`QoS`) enablement on the forwarded traffic.
+It is assumed that the traffic is forwarded using the default policy until it
+is identified and new policies are enforced.
+
+The expected response delay is considered to be negligible, as only a small
+number of packets are required to identify each flow.
+
+Graphical Overview
+==================
+
+.. code-block:: console
+
+   +----------------------------+
+   |                            |
+   | Virtual Traffic Classifier |
+   |                            |
+   |     Analysing/Forwarding   |
+   |        ------------>       |
+   |     ethA          ethB     |
+   |                            |
+   +----------------------------+
+        |                 ^
+        |                 |
+        v                 |
+   +----------------------------+
+   |                            |
+   |      Virtual Switch        |
+   |                            |
+   +----------------------------+
+
+Install
+=======
+
+Run the build.sh script with root privileges.
+
+Run
+===
+
+sudo ./pfbridge -a eth1 -b eth2
+
+Development Environment
+=======================
+
+Ubuntu 14.04
diff --git a/docs/testing/user/userguide/05-apexlake_installation.rst b/docs/testing/user/userguide/05-apexlake_installation.rst
new file mode 100644
index 000000000..d4493e0f8
--- /dev/null
+++ b/docs/testing/user/userguide/05-apexlake_installation.rst
@@ -0,0 +1,300 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+
+.. _DPDK: http://dpdk.org/doc/nics
+.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/
+.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
+.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver
+.. _here: https://wiki.opnfv.org/vtc
+
+
+============================
+Apexlake Installation Guide
+============================
+
+Abstract
+--------
+
+ApexLake is a framework that provides automatic execution of experiments and
+related data collection, to enable a user to validate the infrastructure
+from the perspective of a Virtual Network Function (:term:`VNF`).
+
+In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`)
+network function is utilized.
+
+
+Framework Hardware Dependencies
+===============================
+
+In order to run the framework there are some hardware-related dependencies
+for ApexLake.
+
+The framework needs to be installed on the same physical node where
+DPDK-pktgen_ is installed.
+
+The physical node hosting the packet generator must have two NICs which are
+DPDK_ compatible.
+
+The two NICs will be connected to the switch where the OpenStack VM
+network is managed.
+
+The switch used must support multicast traffic and :term:`IGMP` snooping.
+Further details about the configuration are provided here_.
+
+The corresponding ports to which the cables are connected need to be
+configured as VLAN trunks, using two of the VLAN IDs available for Neutron.
+Note the VLAN IDs used, as they will be required in later configuration
+steps.
+
+
+Framework Software Dependencies
+===============================
+Before starting the framework, a number of dependencies must first be
+installed. The following describes the set of instructions to be executed via
+the Linux shell in order to install and configure the required dependencies.
+
+1. Install Dependencies.
+
+To support the framework dependencies the following packages must be
+installed. The example provided is based on Ubuntu and needs to be executed
+in root mode.
+
+::
+
+    apt-get install python-dev
+    apt-get install python-pip
+    apt-get install python-mock
+    apt-get install tcpreplay
+    apt-get install libpcap-dev
+
+2. Source the OpenStack openrc file.
+
+::
+
+    source openrc
+
+3. Configure OpenStack Neutron.
+
+In order to support traffic generation and management by the virtual
+Traffic Classifier, the configuration of the port security driver
+extension is required for Neutron.
+
+For further details please refer to PORTSEC_.
+This step can be skipped if the target OpenStack is the Juno or Kilo
+release, but it is required to support Liberty.
+It is therefore required to indicate the release version in the
+configuration file located in ./yardstick/vTC/apexlake/apexlake.conf.
+
+
+4. Create two networks based on VLANs in Neutron.
+
+To enable network communications between the packet generator and the compute
+node, two networks must be created via Neutron and mapped to the VLAN IDs
+that were previously used in the configuration of the physical switch.
+The following shows the typical set of commands required to configure Neutron
+correctly.
+The physical switches need to be configured accordingly.
+
+::
+
+    VLAN_1=2032
+    VLAN_2=2033
+    PHYSNET=physnet2
+    neutron net-create apexlake_inbound_network \
+            --provider:network_type vlan \
+            --provider:segmentation_id $VLAN_1 \
+            --provider:physical_network $PHYSNET
+
+    neutron subnet-create apexlake_inbound_network \
+            192.168.0.0/24 --name apexlake_inbound_subnet
+
+    neutron net-create apexlake_outbound_network \
+            --provider:network_type vlan \
+            --provider:segmentation_id $VLAN_2 \
+            --provider:physical_network $PHYSNET
+
+    neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \
+            --name apexlake_outbound_subnet
+
+
+5. Download the Ubuntu Cloud Image and load it into Glance.
+
+The virtual Traffic Classifier is supported on top of the Ubuntu 14.04 cloud
+image. The image can be downloaded to the local machine and loaded into
+Glance using the following commands:
+
+::
+
+    wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
+    glance image-create \
+            --name ubuntu1404 \
+            --is-public true \
+            --disk-format qcow2 \
+            --container-format bare \
+            --file trusty-server-cloudimg-amd64-disk1.img
+
+
+6. Configure the test cases.
+
+The VLAN tags must also be included in the test case Yardstick yaml file
+as parameters for the following test cases:
+
+    * :doc:`opnfv_yardstick_tc006`
+
+    * :doc:`opnfv_yardstick_tc007`
+
+    * :doc:`opnfv_yardstick_tc020`
+
+    * :doc:`opnfv_yardstick_tc021`
+
+
+Install and Configure DPDK Pktgen
++++++++++++++++++++++++++++++++++
+
+Execution of the framework is based on DPDK Pktgen.
+If DPDK Pktgen has not been installed, it is necessary to download, install,
+compile and configure it.
+The user can create a directory and download the DPDK packet generator source
+code:
+
+::
+
+    cd experimental_framework/libraries
+    mkdir dpdk_pktgen
+    git clone https://github.com/pktgen/Pktgen-DPDK.git
+
+For instructions on the installation and configuration of DPDK and DPDK
+Pktgen please follow the official DPDK Pktgen README file.
+
+Once the installation is completed, it is necessary to load the DPDK kernel
+driver, as follows:
+
+::
+
+    insmod uio
+    insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+
+It is necessary to set the configuration file to support the desired Pktgen
+configuration.
+A description of the required configuration parameters and supporting
+examples is provided in the following:
+
+::
+
+    [PacketGen]
+    packet_generator = dpdk_pktgen
+
+    # This is the directory where the packet generator is installed
+    # (if the user previously installed dpdk-pktgen, it is required
+    # to provide the directory where it is installed).
+    pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/
+
+    # This is the directory where DPDK is installed
+    dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/
+
+    # Name of the dpdk-pktgen program that starts the packet generator
+    program_name = app/app/x86_64-native-linuxapp-gcc/pktgen
+
+    # DPDK coremask (see DPDK-Pktgen readme)
+    coremask = 1f
+
+    # DPDK memory channels (see DPDK-Pktgen readme)
+    memory_channels = 3
+
+    # Name of the interface of the pktgen to be used to send traffic (vlan_sender)
+    name_if_1 = p1p1
+
+    # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver)
+    name_if_2 = p1p2
+
+    # PCI bus address corresponding to if_1
+    bus_slot_nic_1 = 01:00.0
+
+    # PCI bus address corresponding to if_2
+    bus_slot_nic_2 = 01:00.1
+
+
+To find the parameters related to the names of the NICs and the addresses of
+the PCI buses, the user may find it useful to run the :term:`DPDK` tool
+nic_bind as follows:
+
+::
+
+    DPDK_DIR/tools/dpdk_nic_bind.py --status
+
+This lists the NICs available on the system and shows the available drivers
+and bus addresses for each interface.
+Please make sure to select NICs which are :term:`DPDK` compatible.
+
+Installation and Configuration of smcroute
+++++++++++++++++++++++++++++++++++++++++++
+
+The user is required to install smcroute, which is used by the framework to
+support multicast communications.
+
+The following is the list of commands required to download and install
+smcroute.
+
+::
+
+    cd ~
+    git clone https://github.com/troglobit/smcroute.git
+    cd smcroute
+    git reset --hard c3f5c56
+    sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh
+    sed -i 's/automake-1.11/automake/g' ./autogen.sh
+    ./autogen.sh
+    ./configure
+    make
+    sudo make install
+    cd ..
+
+The reset to the specified commit ID is required.
+It is also necessary to create a configuration file using the following
+command:
+
+::
+
+    SMCROUTE_NIC=(name of the nic)
+
+where the name of the NIC is the name previously used for the variable
+"name_if_2". For example:
+
+::
+
+    SMCROUTE_NIC=p1p2
+
+Then create the smcroute configuration file /etc/smcroute.conf:
+
+::
+
+    echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf
+
+
+At the end of this procedure it will be necessary to perform the following
+actions to add the user to the sudoers:
+
+::
+
+    adduser USERNAME sudo
+    echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+
+
+Experiment using SR-IOV Configuration on the Compute Node
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node,
+a compatible NIC is required.
+NIC configuration depends on model and vendor. After proper configuration to
+support :term:`SR-IOV`, a proper configuration of OpenStack is required.
+For further information, please refer to the SRIOV_ configuration guide.
+
+Finalizing the installation of the framework on the system
+===========================================================
+
+The installation of the framework on the system requires the setup of the
+project. After entering the apexlake directory, it is sufficient to run the
+following command:
+
+::
+
+    python setup.py install
+
+Since some elements are copied into the /tmp directory (see the configuration
+file), it may be necessary to repeat this step after a reboot of the host.
diff --git a/docs/testing/user/userguide/06-apexlake_api.rst b/docs/testing/user/userguide/06-apexlake_api.rst
new file mode 100644
index 000000000..35a1dbe3e
--- /dev/null
+++ b/docs/testing/user/userguide/06-apexlake_api.rst
@@ -0,0 +1,89 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+
+=================================
+Apexlake API Interface Definition
+=================================
+
+Abstract
+--------
+
+The API interface provided by the framework to enable the execution of test
+cases is defined as follows.
+
+
+init
+----
+
+**static init()**
+
+    Initializes the Framework
+
+    **Returns** None
+
+
+execute_framework
+-----------------
+
+**static execute_framework** (test_cases,
+
+                              iterations,
+
+                              heat_template,
+
+                              heat_template_parameters,
+
+                              deployment_configuration,
+
+                              openstack_credentials)
+
+    Executes the framework according to the specified inputs
+
+    **Parameters**
+
+        - **test_cases**
+
+            Test cases to be run with the workload (dict() of dict())
+
+            Example:
+                test_case = dict()
+
+                test_case['name'] = 'module.Class'
+
+                test_case['params'] = dict()
+
+                test_case['params']['throughput'] = '1'
+
+                test_case['params']['vlan_sender'] = '1000'
+
+                test_case['params']['vlan_receiver'] = '1001'
+
+                test_cases = [test_case]
+
+        - **iterations**
+            Number of test cycles to be executed (int)
+
+        - **heat_template**
+            (string) File name of the heat template corresponding to the
+            workload to be deployed. It contains the parameters to be
+            evaluated in the form of #parameter_name.
+            (See heat_templates/vTC.yaml as example.)
+
+        - **heat_template_parameters**
+            (dict) Parameters to be provided as input to the heat template.
+            See http://docs.openstack.org/developer/heat/template_guide/hot_guide.html,
+            section "Template input parameters", for further info.
+
+        - **deployment_configuration**
+            (dict[string] = list(strings)) Dictionary of parameters
+            representing the deployment configuration of the workload.
+
+            The key is a string corresponding to the name of the parameter;
+            the value is a list of strings representing the values to be
+            assumed by a specific parameter. The parameters are user defined:
+            they have to correspond to the placeholders (#parameter_name)
+            specified in the heat template.
+
+    **Returns** dict() containing results
diff --git a/docs/testing/user/userguide/07-nsb-overview.rst b/docs/testing/user/userguide/07-nsb-overview.rst
new file mode 100644
index 000000000..19719f1a7
--- /dev/null
+++ b/docs/testing/user/userguide/07-nsb-overview.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016-2017 Intel Corporation.
+
+===================================
+Network Services Benchmarking (NSB)
+===================================
+
+Abstract
+========
+
+.. _Yardstick: https://wiki.opnfv.org/yardstick
+
+This chapter provides an overview of NSB, a contribution to OPNFV
+Yardstick_ from Intel.
+
+Overview
+========
+
+Goal: extend Yardstick to perform real-world VNF and NFVI characterization
+and benchmarking with repeatable and deterministic methods.
+
+Network Services Benchmarking (NSB) extends the Yardstick framework to do
+VNF characterization and benchmarking in three different execution
+environments: bare metal (i.e. a native Linux environment), standalone
+virtual environment, and managed virtualized environment (e.g. OpenStack).
+It also brings in the capability to interact with external traffic
+generators, both hardware- and software-based, for triggering and validating
+traffic according to user-defined profiles.
+
+The NSB extension includes:
+
+* Generic data models of Network Services, based on ETSI specs
+* New Standalone context for VNF testing, e.g. SRIOV, OVS, OVS-DPDK
+* Generic VNF configuration models and metrics implemented with Python
+  classes
+* Traffic generator features and traffic profiles
+
+  * L1-L3 stateless traffic profiles
+  * L4-L7 stateful traffic profiles
+  * Tunneling protocol / network overlay support
+
+* Test case samples
+
+  * Ping
+  * Trex
+  * vPE, vCGNAT, vFirewall etc. - IPv4 throughput, latency etc.
+
+* Traffic generators like Trex, ab/nginx, ixia, iperf etc.
+* KPIs for a given use case:
+
+  * System agent support for collecting NFVI KPIs, including:
+
+    * CPU statistics
+    * Memory BW
+    * OVS-DPDK stats
+
+  * Network KPIs, e.g. inpackets, outpackets, throughput, latency etc.
+  * VNF KPIs, e.g. packet_in, packet_drop, packet_fwd etc.
+
+Architecture
+============
+The Network Service (NS) defines a set of Virtual Network Functions (VNF)
+connected together using NFV infrastructure.
+
+The Yardstick NSB extension can support multiple VNFs created by different
+vendors, including traffic generators. Every VNF being tested has its
+own data model. The network service defines a VNF modelling based on the
+network functionality it performs. Part of the data model is a set of
+configuration parameters, the number of connection points used and the
+flavor, including the core count and memory amount.
+
+ETSI defines a Network Service as a set of configurable VNFs working in
+some NFV Infrastructure, connected to each other using Virtual Links
+available through Connection Points. The ETSI MANO specification defines a
+set of management entities called Network Service Descriptors (NSD) and
+VNF Descriptors (VNFD) that define a real network service. The picture below
+gives an example of how a real network operator use case can map onto the
+ETSI network service definition.
+
+The network service framework performs the necessary test steps. This may
+involve:
+
+* Interacting with the traffic generator, providing the inputs on traffic
+  type / packet structure to generate the required traffic as per the test
+  case. Traffic profiles are used for this.
+* Executing the commands required for the test procedure and analysing the
+  command output to confirm whether the command was executed correctly,
+  e.g. as per the test case, running the traffic for the given time period
+  or waiting for the necessary time delay.
+* Verifying the test result.
+* Validating the traffic flow from the SUT.
+* Fetching tables / data from the SUT and verifying the values as per the
+  test case.
+* Uploading the logs from the SUT onto the test harness server.
+* Reading the KPIs provided by the particular VNF.
+
+Components of Network Service
+-----------------------------
+
+* *Models for Network Service benchmarking*: Network Service benchmarking
+  requires a proper modelling approach. NSB provides models as Python files
+  defining NSDs and VNFDs.
+
+The benchmark control application, being a part of OPNFV Yardstick, can call
+those Python models to instantiate and configure the VNFs. Depending on the
+infrastructure type (bare metal or fully virtualized), those calls can be
+made directly or through a MANO system.
+
+* *Traffic generators in NSB*: Any benchmark application requires a set of
+  traffic generators and traffic profiles defining the method in which
+  traffic is generated.
+
+The Network Service benchmarking model extends the network service
+definition with a set of Traffic Generators (TG) that are treated the same
+way as the other VNFs that are part of the benchmarked network service. Like
+the other VNFs, the traffic generators are instantiated and terminated.
+
+Every traffic generator has its own configuration, defined as a traffic
+profile, and a set of supported KPIs. The Python models for TGs are extended
+with specific calls to listen for and generate traffic.
+
+* *The stateless TREX traffic generator*: The main traffic generator used as
+  Network Service stimulus is the open source TREX tool.
+
+The TREX tool can generate any kind of stateless traffic.
+
+.. code-block:: console
+
+   +--------+      +-------+      +--------+
+   |        |      |       |      |        |
+   |  Trex  | ---> |  VNF  | ---> |  Trex  |
+   |        |      |       |      |        |
+   +--------+      +-------+      +--------+
+
+Supported test case scenarios:
+
+* Correlated UDP traffic using the TREX traffic generator and a replay VNF:
+
+  * using different IMIX configurations, e.g. pure voice, pure video traffic
+  * using different numbers of IP flows, e.g. 1 flow, 1K, 16K, 64K, 256K,
+    1M flows
+  * using different numbers of configured rules, e.g. 1 rule, 1K, 10K rules
+
+For UDP correlated traffic, the following Key Performance Indicators are
+collected for every combination of test case parameters:
+
+* RFC2544 throughput for various defined loss rates (1% is the default)
+
+Graphical Overview
+==================
+
+NSB testing with the Yardstick framework facilitates performance testing of
+the various VNFs provided.
+
+.. code-block:: console
+
+   +-----------+
+   |           |                                    +-----------+
+   |   vPE     |                                  ->|TGen Port 0|
+   | TestCase  |                                  | +-----------+
+   |           |                                  |
+   +-----------+     +------------------+ +-------+  |
+   |           | --- API --> |   VNF    | <--->
+   +-----------+     |    Yardstick     | +-------+  |
+   | Test Case | --> |   NSB Testing    |            |
+   +-----------+     |                  |            |
+         |           |                  |            |
+         |           +------------------+            |
+   +-----------+                                     | +-----------+
+   |  Traffic  |                                   ->|TGen Port 1|
+   |  patterns |                                     +-----------+
+   +-----------+
+
+              Figure 1: Network Service - 2 server configuration
+
+
+Install
+=======
+
+Run the nsb_install.sh script with root privileges.
+
+Run
+===
+
+source ~/.bash_profile
+cd /yardstick/cmd
+sudo -E ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
+
+Development Environment
+=======================
+
+Ubuntu 14.04, Ubuntu 16.04
diff --git a/docs/testing/user/userguide/08-nsb_installation.rst b/docs/testing/user/userguide/08-nsb_installation.rst
new file mode 100644
index 000000000..a390bb7d7
--- /dev/null
+++ b/docs/testing/user/userguide/08-nsb_installation.rst
@@ -0,0 +1,253 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016-2017 Intel Corporation.
+
+Yardstick - NSB Testing - Installation
+======================================
+
+Abstract
+--------
+
+Yardstick supports installation on Ubuntu 14.04 or via a Docker image. Both
+installation procedures are detailed in the sections below.
+
+Network Services Benchmarking (NSB) extends the Yardstick framework to do
+VNF characterization and benchmarking in three different execution
+environments: bare metal (i.e. a native Linux environment), standalone
+virtual environment, and managed virtualized environment (e.g. OpenStack).
+It also brings in the capability to interact with external traffic
+generators, both hardware- and software-based, for triggering and validating
+traffic according to user-defined profiles.
+
+The steps needed to run Yardstick with NSB testing are:
+
+* Install Yardstick (NSB testing).
+* Set up a pod.yaml describing the test topology.
+* Create the test configuration yaml file.
+* Run the test case.
+
+
+Prerequisites
+-------------
+
+Refer to chapter :doc:`09-installation` for more information on Yardstick
+prerequisites.
+
+Several prerequisites are needed for Yardstick (VNF testing):
+
+* Python modules: pyzmq, pika
+* flex
+* bison
+* build-essential
+* automake
+* libtool
+* librabbitmq-dev
+* rabbitmq-server
+* collectd
+* intel-cmt-cat
+
+Installing Yardstick on Ubuntu 14.04
+------------------------------------
+
+.. _install-framework:
+
+You can install the Yardstick framework directly on Ubuntu 14.04 or in an
+Ubuntu 14.04 Docker image. No matter which way you choose to install
+Yardstick, the following installation steps are identical.
+
+If you choose to use the Ubuntu 14.04 Docker image, you can pull the Ubuntu
+14.04 Docker image from Docker hub:
+
+::
+
+  docker pull ubuntu:14.04
+
+Installing the Yardstick framework
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Download the source code and install the python dependencies:
+
+::
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  ./nsb_setup.sh
+
+It will automatically download all the packages needed for the NSB testing
+setup.
+
+System Topology
+---------------
+
+.. code-block:: console
+
+  +----------+              +----------+
+  |          |              |          |
+  |          | (0)----->(0) |  Ping/   |
+  |   TG1    |              |  vPE/    |
+  |          |              |  2Trex   |
+  |          | (1)<-----(1) |          |
+  +----------+              +----------+
+  trafficgen_1                   vnf
+
+
+OpenStack parameters and credentials
+------------------------------------
+
+Environment variables
+^^^^^^^^^^^^^^^^^^^^^
+Before running Yardstick (NSB testing) it is necessary to export the traffic
+generator libraries:
+
+::
+
+  source ~/.bash_profile
+
+Config yardstick conf
+^^^^^^^^^^^^^^^^^^^^^
+cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+
+vi /etc/yardstick/yardstick.conf
+
+Config yardstick.conf
+::
+
+    [DEFAULT]
+    debug = True
+    dispatcher = influxdb
+
+    [dispatcher_influxdb]
+    timeout = 5
+    target = http://{YOUR_IP_HERE}:8086
+    db_name = yardstick
+    username = root
+    password = root
+
+    [nsb]
+    trex_path=/opt/nsb_bin/trex/scripts
+    bin_path=/opt/nsb_bin
+
+
+Config pod.yaml describing the topology
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.
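+Below is a small, hypothetical helper (not part of Yardstick) that
+sanity-checks a pod.yaml for the fields used in the example that follows;
+the field names are taken from that example:
+
+.. code-block:: python
+
+   import yaml
+
+   REQUIRED_NODE_FIELDS = ("name", "role", "ip", "user", "password")
+
+   def check_pod_yaml(path):
+       """Verify that every node entry carries the required fields."""
+       with open(path) as f:
+           pod = yaml.safe_load(f)
+       for node in pod.get("nodes", []):
+           missing = [k for k in REQUIRED_NODE_FIELDS if k not in node]
+           if missing:
+               raise ValueError("node %s is missing fields: %s"
+                                % (node.get("name", "?"), ", ".join(missing)))
+           # Interfaces should at least define vpci and local_mac.
+           for name, iface in node.get("interfaces", {}).items():
+               for key in ("vpci", "local_mac"):
+                   if key not in iface:
+                       raise ValueError("interface %s of node %s lacks %s"
+                                        % (name, node["name"], key))
+       print("pod.yaml looks consistent")
+
+   check_pod_yaml("/etc/yardstick/nodes/pod.yaml")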
+
+Copy /etc/yardstick/nodes/pod.yaml.nsb.example to
+/etc/yardstick/nodes/pod.yaml.
+
+Config pod.yaml
+::
+
+    nodes:
+    -
+        name: trafficgen_1
+        role: TrafficGen
+        ip: 1.1.1.1
+        user: root
+        password: r00t
+        interfaces:
+            xe0:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.0"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 0
+                local_ip: "152.16.100.20"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:01"
+            xe1:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.1"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 1
+                local_ip: "152.16.40.20"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:02"
+
+    -
+        name: vnf
+        role: vnf
+        ip: 1.1.1.2
+        user: root
+        password: r00t
+        host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
+        interfaces:
+            xe0:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.0"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 0
+                local_ip: "152.16.100.19"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:03"
+
+            xe1:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.1"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 1
+                local_ip: "152.16.40.19"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:04"
+        routing_table:
+        - network: "152.16.100.20"
+          netmask: "255.255.255.0"
+          gateway: "152.16.100.20"
+          if: "xe0"
+        - network: "152.16.40.20"
+          netmask: "255.255.255.0"
+          gateway: "152.16.40.20"
+          if: "xe1"
+        nd_route_tbl:
+        - network: "0064:ff9b:0:0:0:0:9810:6414"
+          netmask: "112"
+          gateway: "0064:ff9b:0:0:0:0:9810:6414"
+          if: "xe0"
+        - network: "0064:ff9b:0:0:0:0:9810:2814"
+          netmask: "112"
+          gateway: "0064:ff9b:0:0:0:0:9810:2814"
+          if: "xe1"
+
+Enable the yardstick virtual environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before executing yardstick test cases, make sure to activate the yardstick
+python virtual environment:
+
+::
+
+  source /opt/nsb_bin/yardstick_venv/bin/activate
+
+
+Examples and verifying the install
+----------------------------------
+
+It is recommended to verify that Yardstick was installed successfully
+by executing some simple commands and test samples. Before executing
+yardstick test cases, make sure the yardstick flavor and the
+yardstick-trusty-server image can be found in Glance and that the openrc
+file has been sourced. Below is an example invocation of the yardstick help
+command and the ping.py test sample:
+::
+
+  yardstick -h
+  yardstick task start samples/ping.yaml
+
+Each testing tool supported by Yardstick has a sample configuration file.
+These configuration files can be found in the **samples** directory.
+
+The default location for the output is ``/tmp/yardstick.out``.
+
+
+Run Yardstick - Network Service Testcases
+-----------------------------------------
+
+NS testing - using the NSBperf CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+  source /opt/nsb_setup/yardstick_venv/bin/activate
+  PYTHONPATH: ". ~/.bash_profile"
+  cd /yardstick/cmd
+  Execute command: ./NSBperf.py -h
+      ./NSBperf.py --vnf <selected vnf> --test <rfc test>
+      eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
+
+NS testing - using the yardstick CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+  source /opt/nsb_setup/yardstick_venv/bin/activate
+  PYTHONPATH: ". ~/.bash_profile"
+  Go to the test case folder you want to execute, e.g.
+  /samples/vnf_samples/nsut//
+  run: yardstick --debug task start
diff --git a/docs/testing/user/userguide/09-installation.rst b/docs/testing/user/userguide/09-installation.rst
new file mode 100644
index 000000000..9c2082a27
--- /dev/null
+++ b/docs/testing/user/userguide/09-installation.rst
@@ -0,0 +1,401 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+
+Yardstick Installation
+======================
+
+Abstract
+--------
+
+Yardstick supports installation on Ubuntu 14.04 or via a Docker image. Both
+installation procedures are detailed in the sections below.
+
+To use Yardstick you should have access to an OpenStack environment, with at
+least Nova, Neutron, Glance, Keystone and Heat installed.
+
+The steps needed to run Yardstick are:
+
+1. Install Yardstick.
+2. Load OpenStack environment variables.
+3. Create a Neutron external network.
+4. Build the Yardstick flavor and a guest image.
+5. Load the guest image into the OpenStack environment.
+6. Create the test configuration .yaml file.
+7. Run the test case.
+
+
+Prerequisites
+-------------
+
+The OPNFV deployment is out of the scope of this document, but it can be
+found in http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html.
+The OPNFV platform is considered as the System Under Test (SUT) in this
+document.
+
+Several prerequisites are needed for Yardstick:
+
+    #. A Jumphost to run Yardstick on
+    #. A Docker daemon installed on the Jumphost
+    #. A public/external network created on the SUT
+    #. Connectivity from the Jumphost to the SUT public/external network
+
+WARNING: Connectivity from the Jumphost is essential, and it is of paramount
+importance to make sure it is working before even considering installing
+and running Yardstick. Also make sure you understand how your networking is
+designed to work.
+
+NOTE: **Jumphost** refers to any server which meets the previous
+requirements. Normally it is the same server from where the OPNFV
+deployment has been triggered previously.
+
+NOTE: If your Jumphost is operating behind a company http proxy and/or
+firewall, please first consult the `Proxy Support`_ section towards the end
+of this document. That section details some tips/tricks which *may* be of
+help in a proxified environment.
+
+
+Installing Yardstick on Ubuntu 14.04
+------------------------------------
+
+.. _install-framework:
+
+You can install the Yardstick framework directly on Ubuntu 14.04 or in an
+Ubuntu 14.04 Docker image. No matter which way you choose to install
+Yardstick, the following installation steps are identical.
+
+If you choose to use the Ubuntu 14.04 Docker image, you can pull the Ubuntu
+14.04 Docker image from Docker hub:
+
+::
+
+  docker pull ubuntu:14.04
+
+Installing the Yardstick framework
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Download the source code and install the python dependencies:
+
+::
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  ./install.sh
+
+
+Installing Yardstick using Docker
+---------------------------------
+
+Yardstick has a Docker image; this Docker image (**Yardstick-stable**)
+serves as a replacement for installing the Yardstick framework in a virtual
+environment (for example as done in :ref:`install-framework`).
+It is recommended to use this Docker image to run Yardstick tests.
+
+Pulling the Yardstick Docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. _dockerhub: https://hub.docker.com/r/opnfv/yardstick/
+
+Pull the Yardstick Docker image ('opnfv/yardstick') from the public
+dockerhub_ registry under the OPNFV account, with the following docker
+command::
+
+  docker pull opnfv/yardstick:stable
+
+After pulling the Docker image, check that it is available with the
+following docker command::
+
+  [yardsticker@jumphost ~]$ docker images
+  REPOSITORY         TAG       IMAGE ID        CREATED      SIZE
+  opnfv/yardstick    stable    a4501714757a    1 day ago    915.4 MB
+
+Run the Docker image:
+
+::
+
+  docker run --privileged=true -it opnfv/yardstick:stable /bin/bash
+
+In the container the Yardstick repository is located in the
+/home/opnfv/repos directory.
+
+
+OpenStack parameters and credentials
+------------------------------------
+
+Environment variables
+^^^^^^^^^^^^^^^^^^^^^
+Before running Yardstick it is necessary to export the OpenStack environment
+variables from the OpenStack *openrc* file (using the ``source`` command) and
+export the external network name
+``export EXTERNAL_NETWORK="external-network-name"``;
+the default name for the external network is ``net04_ext``.
+
+Credential environment variables in the *openrc* file have to include at
+least:
+
+* OS_AUTH_URL
+* OS_USERNAME
+* OS_PASSWORD
+* OS_TENANT_NAME
+
+A sample openrc file may look like this:
+
+* export OS_PASSWORD=console
+* export OS_TENANT_NAME=admin
+* export OS_AUTH_URL=http://172.16.1.222:35357/v2.0
+* export OS_USERNAME=admin
+* export OS_VOLUME_API_VERSION=2
+* export EXTERNAL_NETWORK=net04_ext
+
+
+Yardstick flavor and guest images
+---------------------------------
+
+Before executing Yardstick test cases, make sure that the yardstick guest
+image and the yardstick flavor are available in OpenStack.
+Detailed steps for creating the yardstick flavor and building the
+yardstick-trusty-server image can be found below.
+
+Yardstick-flavor
+^^^^^^^^^^^^^^^^
+Most of the sample test cases in Yardstick use an OpenStack flavor called
+*yardstick-flavor*, which deviates from the OpenStack standard m1.tiny flavor
+by its disk size - instead of 1GB it has 3GB. The other parameters are the
+same as in m1.tiny.
+
+Create yardstick-flavor:
+
+::
+
+  nova flavor-create yardstick-flavor 100 512 3 1
+
+
+.. _guest-image:
+
+Building a guest image
+^^^^^^^^^^^^^^^^^^^^^^
+Most of the sample test cases in Yardstick use a guest image called
+*yardstick-trusty-server*, which is an Ubuntu Cloud Server image extended
+with all the tools required to run the test cases supported by Yardstick.
+Yardstick has a tool for building this custom image. It is necessary to have
+sudo rights to use this tool.
+
+You may also need to install several additional packages to use this tool,
+using the commands below:
+
+::
+
+  apt-get update && apt-get install -y \
+      qemu-utils \
+      kpartx
+
+This image can be built using the following command while in the directory
+where Yardstick is installed (``~/yardstick`` if the framework is installed
+by following the commands above):
+
+::
+
+  export YARD_IMG_ARCH="amd64"
+  sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers
+  sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
+
+**Warning:** by default the script will create files in
+``/tmp/workspace/yardstick``, and the files will be owned by root!
+
+If you are building this guest image inside a Docker container, make sure
+the container is run with the privileged option.
+
+The created image can be added to OpenStack using the ``glance image-create``
+command or via the OpenStack Dashboard.
+
+Example command:
+
+::
+
+  glance --os-image-api-version 1 image-create \
+      --name yardstick-image --is-public true \
+      --disk-format qcow2 --container-format bare \
+      --file /tmp/workspace/yardstick/yardstick-image.img
+
+Some Yardstick test cases use a Cirros image; you can find one at
+http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
+
+
+Automatic flavor and image creation
+-----------------------------------
+Yardstick has a script for automatically creating the yardstick flavor and
+building the guest images. This script is mainly used in CI, but you can
+still use it in your local environment.
+
+Example command:
+
+::
+
+  export YARD_IMG_ARCH="amd64"
+  sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers
+  source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh
+
+
+Yardstick default key pair
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Yardstick uses an SSH key pair to connect to the guest image. This key pair
+can be found in the ``resources/files`` directory. To run the
+``ping-hot.yaml`` test sample, this key pair needs to be imported into the
+OpenStack environment.
+
+
+Examples and verifying the install
+----------------------------------
+
+It is recommended to verify that Yardstick was installed successfully
+by executing some simple commands and test samples. Before executing
+yardstick test cases, make sure the yardstick flavor and the
+yardstick-trusty-server image can be found in Glance and that the openrc
+file has been sourced. Below is an example invocation of the yardstick help
+command and the ping.py test sample:
+::
+
+  yardstick -h
+  yardstick task start samples/ping.yaml
+
+Each testing tool supported by Yardstick has a sample configuration file.
+These configuration files can be found in the **samples** directory.
+
+The default location for the output is ``/tmp/yardstick.out``.
+
+
+Deploy InfluxDB and Grafana locally
+-----------------------------------
+
+Pull docker images
+^^^^^^^^^^^^^^^^^^
+
+::
+
+  docker pull tutum/influxdb
+  docker pull grafana/grafana
+
+Run influxdb and config
+^^^^^^^^^^^^^^^^^^^^^^^
+Run influxdb
+::
+
+  docker run -d --name influxdb \
+      -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \
+      tutum/influxdb
+  docker exec -it influxdb bash
+
+Config influxdb
+::
+
+  influx
+  >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+  >CREATE DATABASE yardstick;
+  >use yardstick;
+  >show MEASUREMENTS;
+
+Run grafana and config
+^^^^^^^^^^^^^^^^^^^^^^
+Run grafana
+::
+
+  docker run -d --name grafana -p 3000:3000 grafana/grafana
+
+Config grafana
+::
+
+  http://{YOUR_IP_HERE}:3000
+  log on using admin/admin and config the database resource to be
+  {YOUR_IP_HERE}:8086
+
+.. image:: images/Grafana_config.png
+   :width: 800px
+   :alt: Grafana data source configuration
+
+Config yardstick conf
+^^^^^^^^^^^^^^^^^^^^^
+cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+
+vi /etc/yardstick/yardstick.conf
+Config yardstick.conf
+::
+
+  [DEFAULT]
+  debug = True
+  dispatcher = influxdb
+
+  [dispatcher_influxdb]
+  timeout = 5
+  target = http://{YOUR_IP_HERE}:8086
+  db_name = yardstick
+  username = root
+  password = root
+
+Now you can run yardstick test cases and store the results in influxdb.
+
+
+Create a test suite for yardstick
+---------------------------------
+
+A test suite in yardstick is a yaml file which includes one or more test
+cases.
+Yardstick is able to run a test suite as a task, so you can customize your
+own test suite and run it in one task.
+
+"tests/opnfv/test_suites" is where yardstick keeps its CI test suites. A
+typical test suite looks like below:
+
+fuel_test_suite.yaml
+
+::
+
+  ---
+  # Fuel integration test task suite
+
+  schema: "yardstick:suite:0.1"
+
+  name: "fuel_test_suite"
+  test_cases_dir: "samples/"
+  test_cases:
+  -
+    file_name: ping.yaml
+  -
+    file_name: iperf3.yaml
+
+As you can see, there are two test cases in fuel_test_suite. The syntax is
+simple: you must specify the schema and the name, then list the test cases
+under the "test_cases" tag and give their relative directory in the
+"test_cases_dir" tag.
+
+Yardstick test suites also support constraints and task arguments for each
+test case. Here is another sample showing this, excerpted from a larger test
+suite.
+
+os-nosdn-nofeature-ha.yaml
+
+::
+
+  ---
+
+  schema: "yardstick:suite:0.1"
+
+  name: "os-nosdn-nofeature-ha"
+  test_cases_dir: "tests/opnfv/test_cases/"
+  test_cases:
+  -
+    file_name: opnfv_yardstick_tc002.yaml
+  -
+    file_name: opnfv_yardstick_tc005.yaml
+  -
+    file_name: opnfv_yardstick_tc043.yaml
+       constraint:
+          installer: compass
+          pod: huawei-pod1
+       task_args:
+          huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml",
+          "host": "node4.LF","target": "node5.LF"}'
+
+As you can see in the test case "opnfv_yardstick_tc043.yaml", there are two
+tags, "constraint" and "task_args". "constraint" is where you specify on
+which installer or pod the test case can be run in the CI environment;
+"task_args" is where you specify the task arguments for each pod.
+
+All in all, to create a test suite in yardstick, you just need to create a
+suite yaml file and add test cases, plus constraints or task arguments if
+necessary.
+
diff --git a/docs/testing/user/userguide/10-yardstick_plugin.rst b/docs/testing/user/userguide/10-yardstick_plugin.rst
new file mode 100644
index 000000000..f16dedd02
--- /dev/null
+++ b/docs/testing/user/userguide/10-yardstick_plugin.rst
@@ -0,0 +1,144 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+
+===================================
+Installing a plug-in into yardstick
+===================================
+
+Abstract
+========
+
+Yardstick currently provides a ``plugin`` CLI command to support integration
+with other OPNFV testing projects. Below is an example invocation of the
+yardstick plugin command, using the Storperf plug-in as a sample.
+
+
+Installing Storperf into yardstick
+==================================
+
+Storperf is delivered as a Docker container from
+https://hub.docker.com/r/opnfv/storperf/tags/.
+
+There are two possible methods for installation in your environment:
+
+* Run the container on the Jump Host
+* Run the container in a VM
+
+In this introduction we will install Storperf on the Jump Host.
+
+
+Step 0: Environment preparation
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+
+Running Storperf on the Jump Host requires:
+
+* Docker must be installed
+* The Jump Host must have access to the OpenStack Controller API
+* The Jump Host must have internet connectivity for downloading the docker
+  image
+* Enough floating IPs must be available to match your agent count
+
+Before installing Storperf into yardstick you need to check your OpenStack
+environment and other dependencies:
+
+1. Make sure docker is installed.
+2. Make sure Keystone, Nova, Neutron, Glance and Heat are installed
+   correctly.
+3. Make sure the Jump Host has access to the OpenStack Controller API.
+4. Make sure the Jump Host has internet connectivity for downloading the
+   docker image.
+5. You need to know where to get the basic OpenStack Keystone authorization
+   info, such as OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, OS_USERNAME.
+6. To run a Storperf container, you need to have the OpenStack Controller
+   environment variables defined and passed to the Storperf container. The
+   best way to do this is to put the environment variables in a
+   "storperf_admin-rc" file. The storperf_admin-rc should include at least
+   the following credential environment variables:
+
+* OS_AUTH_URL
+* OS_TENANT_ID
+* OS_TENANT_NAME
+* OS_PROJECT_NAME
+* OS_USERNAME
+* OS_PASSWORD
+* OS_REGION_NAME
+
+During environment preparation, a "prepare_storperf_admin-rc.sh" script can
+be used to generate the storperf_admin-rc file:
+::
+
+  #!/bin/bash
+  AUTH_URL=${OS_AUTH_URL}
+  USERNAME=${OS_USERNAME:-admin}
+  PASSWORD=${OS_PASSWORD:-console}
+  TENANT_NAME=${OS_TENANT_NAME:-admin}
+  VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2}
+  PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME}
+  TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
+  rm -f ~/storperf_admin-rc
+  touch ~/storperf_admin-rc
+  echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc
+  echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc
+  echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc
+  echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc
+  echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc
+  echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc
+  echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc
+
+
+Step 1: Plug-in configuration file preparation
+++++++++++++++++++++++++++++++++++++++++++++++
+
+To install a plug-in, first you need to prepare a plug-in configuration file
+in YAML format and store it in the "plugin" directory. The plug-in
+configuration file works as the input to the yardstick "plugin" command.
+Below is a sample Storperf plug-in configuration file:
+::
+
+  ---
+  # StorPerf plugin configuration file
+  # Used for integration StorPerf into Yardstick as a plugin
+  schema: "yardstick:plugin:0.1"
+  plugins:
+    name: storperf
+  deployment:
+    ip: 192.168.23.2
+    user: root
+    password: root
+
+In the plug-in configuration file, you need to specify the plug-in name and
+the plug-in deployment info, including the node IP and the node login
+username and password. Here Storperf will be installed on IP 192.168.23.2,
+which is the Jump Host in this example environment.
+
+Step 2: Plug-in install/remove scripts preparation
+++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Under the "yardstick/resource/scripts" directory, there are two folders: an
+"install" folder and a "remove" folder. You need to store the plug-in
+install/remove scripts in these two folders respectively.
+
+The detailed install and remove operations should be defined in these two
+scripts. The names of both the install and remove scripts must match the
+plug-in name that you specified in the plug-in configuration file.
+For example, the install and remove scripts for Storperf are both named
+"storperf.bash".
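+
+As an illustration only, the plug-in configuration file described above can
+be parsed like this to obtain the script name and the deployment target
+(this helper is hypothetical and not part of Yardstick):
+
+.. code-block:: python
+
+   import yaml
+
+   def load_plugin_config(path):
+       """Read a plug-in configuration file, return name, target, script."""
+       with open(path) as f:
+           cfg = yaml.safe_load(f)
+       name = cfg["plugins"]["name"]      # e.g. "storperf"
+       deployment = cfg["deployment"]     # ip / user / password of target
+       # The install script is expected at resource/scripts/install/<name>.bash
+       script = "%s.bash" % name
+       return name, deployment, script
+
+   name, deployment, script = load_plugin_config("plugin/storperf.yaml")
+   print(name, deployment["ip"], script)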
+
+Step 3: Install and remove Storperf
++++++++++++++++++++++++++++++++++++
+
+To install Storperf, simply execute the following command
+::
+
+  # Install Storperf
+  yardstick plugin install plugin/storperf.yaml
+
+Removing Storperf from yardstick
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To remove Storperf, simply execute the following command
+::
+
+  # Remove Storperf
+  yardstick plugin remove plugin/storperf.yaml
+
+The yardstick plugin command logs into the deployment target using the
+specified username and password and then executes the corresponding install
+or remove script.
diff --git a/docs/testing/user/userguide/11-result-store-InfluxDB.rst b/docs/testing/user/userguide/11-result-store-InfluxDB.rst
new file mode 100644
index 000000000..a0bb48a80
--- /dev/null
+++ b/docs/testing/user/userguide/11-result-store-InfluxDB.rst
@@ -0,0 +1,86 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+
+==============================================
+Store Other Project's Test Results in InfluxDB
+==============================================
+
+Abstract
+========
+
+.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2
+
+This chapter illustrates how to run plug-in test cases and store test
+results into the community's InfluxDB. The framework is shown in Framework_.
+
+
+.. image:: images/InfluxDB_store.png
+   :width: 800px
+   :alt: Store Other Project's Test Results in InfluxDB
+
+Store Storperf Test Results into the Community's InfluxDB
+=========================================================
+
+.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
+.. _Mingjiang: limingjiang@huawei.com
+.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2
+.. _Login: http://testresults.opnfv.org/grafana/login
+
+As shown in Framework_, there are two ways to store Storperf test results
+into the community's InfluxDB:
+
+1. Yardstick asks Storperf to run the test case. After the test case is
+   completed, Yardstick reads the test results from Storperf via the ReST
+   API and posts the test data to InfluxDB.
+
+2. Additionally, Storperf can run tests by itself and post the test results
+   directly to InfluxDB. This method of posting data directly to InfluxDB
+   will be supported in the future.
+
+The plan is to support a REST API in the D release, so that other testing
+projects can call it to use the Yardstick dispatcher service to push data to
+Yardstick's InfluxDB database.
+
+For now, InfluxDB only supports the line protocol; the JSON protocol is
+deprecated.
+
+Take the ping test case for example; the raw_result is in JSON format, like
+this:
+::
+
+    "benchmark": {
+        "timestamp": 1470315409.868095,
+        "errors": "",
+        "data": {
+          "rtt": {
+          "ares": 1.125
+          }
+        },
+      "sequence": 1
+      },
+    "runner_id": 2625
+  }
+
+With the help of "influxdb_line_protocol", the JSON is transformed into a
+line string like below:
+::
+
+  'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown,
+  runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3-
+  301c99963656,version=unknown rtt.ares=1.125 1470315409868094976'
+
+So, for data output in JSON format, you just need to transform the JSON into
+line format and call the InfluxDB API to post the data into the database.
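+The following is a simplified, illustrative sketch of that transformation
+(tag and field handling is reduced to the bare minimum; the real dispatcher
+does more):
+
+.. code-block:: python
+
+   import json
+
+   def to_line_protocol(measurement, tags, record):
+       """Flatten one benchmark record into an InfluxDB line."""
+       tag_part = ",".join("%s=%s" % (k, v) for k, v in sorted(tags.items()))
+       fields = {}
+       for name, value in record["data"].items():
+           if isinstance(value, dict):   # e.g. {"rtt": {"ares": 1.125}}
+               for sub, v in value.items():
+                   fields["%s.%s" % (name, sub)] = v
+           else:
+               fields[name] = value
+       field_part = ",".join("%s=%s" % (k, v)
+                             for k, v in sorted(fields.items()))
+       # InfluxDB expects the timestamp in nanoseconds.
+       ts = int(record["timestamp"] * 1e9)
+       return "%s,%s %s %d" % (measurement, tag_part, field_part, ts)
+
+   record = json.loads('{"timestamp": 1470315409.868095,'
+                       ' "data": {"rtt": {"ares": 1.125}}, "sequence": 1}')
+   print(to_line_protocol("ping", {"host": "athena.demo"}, record))
+   # -> ping,host=athena.demo rtt.ares=1.125 1470315409868094976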
diff --git a/docs/testing/user/userguide/11-result-store-InfluxDB.rst b/docs/testing/user/userguide/11-result-store-InfluxDB.rst
new file mode 100644
index 000000000..a0bb48a80
--- /dev/null
+++ b/docs/testing/user/userguide/11-result-store-InfluxDB.rst
@@ -0,0 +1,86 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+
+==============================================
+Store Other Project's Test Results in InfluxDB
+==============================================
+
+Abstract
+========
+
+.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2
+
+This chapter illustrates how to run plug-in test cases and store test results
+into the community's InfluxDB. The framework is shown in Framework_.
+
+
+.. image:: images/InfluxDB_store.png
+   :width: 800px
+   :alt: Store Other Project's Test Results in InfluxDB
+
+Store Storperf Test Results into Community's InfluxDB
+=====================================================
+
+.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
+.. _Mingjiang: limingjiang@huawei.com
+.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2
+.. _Login: http://testresults.opnfv.org/grafana/login
+
+As shown in Framework_, there are two ways to store Storperf test results
+into the community's InfluxDB:
+
+1. Yardstick asks Storperf to run the test case. After the test case is
+   completed, Yardstick reads the test results via the ReST API from Storperf
+   and posts the test data to InfluxDB.
+
+2. Additionally, Storperf can run tests by itself and post the test results
+   directly to InfluxDB. The method for posting data directly to InfluxDB
+   will be supported in the future.
+
+The plan is to support a ReST API in the D release so that other testing
+projects can call it to use the yardstick dispatcher service to push data to
+yardstick's InfluxDB database.
+
+For now, InfluxDB only supports the line protocol; the JSON protocol is
+deprecated.
+
+Take the ping test case for example; the raw_result is in JSON format, like this:
+::
+
+  {
+    "benchmark": {
+      "timestamp": 1470315409.868095,
+      "errors": "",
+      "data": {
+        "rtt": {
+          "ares": 1.125
+        }
+      },
+      "sequence": 1
+    },
+    "runner_id": 2625
+  }
+
+With the help of "influxdb_line_protocol", the JSON is transformed into a line
+string like the one below:
+::
+
+  'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown,
+  runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3-
+  301c99963656,version=unknown rtt.ares=1.125 1470315409868094976'
+
+So, for data output in JSON format, you just need to transform the JSON into
+the line format and call the InfluxDB API to post the data into the database.
+All of this functionality has already been implemented in Influxdb_.
+If you need support on this, please contact Mingjiang_.
+::
+
+  curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' \
+    --data-binary 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...'
+
+Grafana will be used for visualizing the collected test data, as shown in
+Visual_. Grafana can be accessed via Login_.
+
+
+.. image:: images/results_visualization.png
+   :width: 800px
+   :alt: results visualization
+
diff --git a/docs/testing/user/userguide/12-grafana.rst b/docs/testing/user/userguide/12-grafana.rst
new file mode 100644
index 000000000..416857b71
--- /dev/null
+++ b/docs/testing/user/userguide/12-grafana.rst
@@ -0,0 +1,119 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2016 Huawei Technologies Co.,Ltd and others
+
+=================
+Grafana dashboard
+=================
+
+
+Abstract
+========
+
+This chapter describes the Yardstick grafana dashboard. The Yardstick grafana
+dashboard can be found here: http://testresults.opnfv.org/grafana/
+
+
+.. image:: images/login.png
+   :width: 800px
+   :alt: Yardstick grafana dashboard
+
+
+Public access
+=============
+
+Yardstick provides a public account for accessing the dashboard. The username
+and password are both set to ‘opnfv’.
+
+
+Testcase dashboard
+==================
+
+For each test case, there is a dedicated dashboard. Shown here is the dashboard
+of TC002.
+
+
+.. image:: images/TC002.png
+   :width: 800px
+   :alt: TC002 dashboard
+
+On the top left of each test case dashboard there is a dashboard selection;
+you can switch to a different test case using this pull-down menu.
+
+Underneath, there is a pod and scenario selection.
+All the pods and scenarios that have ever published test data to the InfluxDB
+will be shown here.
+
+You can check multiple pods or scenarios.
+
+For each test case, there is a short description and a link to the detailed
+test case information in the Yardstick user guide.
+
+Underneath is the result presentation section.
+You can use the time period selection on the top right corner to zoom in or
+zoom out on the chart.
+
+
+Administration access
+=====================
+
+For a user with administration rights it is easy to update and save any
+dashboard configuration. Saved updates immediately take effect and become live.
+This may cause issues like:
+
+- Changes and updates made to the live configuration in Grafana can compromise
+  existing Grafana content in an unwanted, unpredicted or incompatible way.
+  Grafana as such is not version controlled; there exists one single Grafana
+  configuration per dashboard.
+- There is a risk that several people disturb each other when doing updates to
+  the same Grafana dashboard at the same time.
+
+Administrators should therefore apply any change with care.
+
+
+Add a dashboard into yardstick grafana
+======================================
+
+Due to security concerns, users using the public opnfv account are not able
+to edit the yardstick grafana directly. It takes a few more steps for a
+non-yardstick user to add a custom dashboard into yardstick grafana.
+
+There are six steps to go.
+
+
+.. image:: images/add.png
+   :width: 800px
+   :alt: Add a dashboard into yardstick grafana
+
+
+1. You need to build a local influxdb and grafana, so you can do the work
+   locally. You can refer to the "How to deploy InfluxDB and Grafana locally"
+   wiki page about how to do this.
+
+2. Once step one is done, you can fetch the existing grafana dashboard
+   configuration file from the yardstick repository and import it into your
+   local grafana (a command-line sketch of this import is shown after this
+   list). After the import is done, your grafana dashboard will be ready to
+   use just like the community's dashboard.
+
+3. The third step is running some test cases to generate test results and
+   publishing them to your local influxdb.
+
+4. Now you have some data to visualize in your dashboard. In the fourth step,
+   it is time to create your own dashboard. You can either modify an existing
+   dashboard or try to create a new one from scratch. If you choose to modify
+   an existing dashboard then in the curtain menu of the existing dashboard do
+   a "Save As..." into a new dashboard copy instance, and then continue doing
+   all updates and saves within the dashboard copy.
+
+5. When finished with all Grafana configuration changes in this temporary
+   dashboard, choose "export" of the updated dashboard copy into a JSON file
+   and put it up for review in Gerrit, in file /yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy.
+   For instance, a typical default name of the file would be "Yardstick-TC001 Copy-1234567891234".
+
+6. Once you finish your dashboard, the next step is to export the configuration
+   file and propose a patch into Yardstick. The Yardstick team will review and
+   merge it into the Yardstick repository. After the review is approved, the
+   Yardstick team will do an "import" of the JSON file and also a "save
+   dashboard" as soon as possible to replace the old live dashboard
+   configuration.
+
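+As mentioned in step 2, importing a dashboard into a local Grafana instance
+can also be done through the Grafana HTTP API instead of the web UI. The
+sketch below assumes a local Grafana on port 3000 with default admin
+credentials and a dashboard JSON file already fetched from the yardstick
+repository; the file name "Yardstick-TC002.json" is only an example, so
+adjust names and paths to your setup.
+::
+
+  # Import a dashboard JSON file into a local Grafana via its HTTP API
+  curl -u admin:admin -H "Content-Type: application/json" \
+    -X POST http://localhost:3000/api/dashboards/db \
+    -d "{\"dashboard\": $(cat Yardstick-TC002.json), \"overwrite\": true}"
+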
diff --git a/docs/testing/user/userguide/13-list-of-tcs.rst b/docs/testing/user/userguide/13-list-of-tcs.rst
new file mode 100644
index 000000000..1b5806cd9
--- /dev/null
+++ b/docs/testing/user/userguide/13-list-of-tcs.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+====================
+Yardstick Test Cases
+====================
+
+Abstract
+========
+
+This chapter lists the available Yardstick test cases.
+Yardstick test cases are divided into two main categories:
+
+* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
+  described in :doc:`02-methodology`
+
+* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
+  aspects of a feature delivered by an OPNFV Project, including the test cases
+  developed for the :term:`VTC`.
+
+Generic NFVI Test Case Descriptions
+===================================
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc001.rst
+   opnfv_yardstick_tc002.rst
+   opnfv_yardstick_tc004.rst
+   opnfv_yardstick_tc005.rst
+   opnfv_yardstick_tc008.rst
+   opnfv_yardstick_tc009.rst
+   opnfv_yardstick_tc010.rst
+   opnfv_yardstick_tc011.rst
+   opnfv_yardstick_tc012.rst
+   opnfv_yardstick_tc014.rst
+   opnfv_yardstick_tc024.rst
+   opnfv_yardstick_tc037.rst
+   opnfv_yardstick_tc038.rst
+   opnfv_yardstick_tc042.rst
+   opnfv_yardstick_tc043.rst
+   opnfv_yardstick_tc044.rst
+   opnfv_yardstick_tc055.rst
+   opnfv_yardstick_tc061.rst
+   opnfv_yardstick_tc063.rst
+   opnfv_yardstick_tc069.rst
+   opnfv_yardstick_tc070.rst
+   opnfv_yardstick_tc071.rst
+   opnfv_yardstick_tc072.rst
+   opnfv_yardstick_tc073.rst
+   opnfv_yardstick_tc075.rst
+   opnfv_yardstick_tc076.rst
+
+OPNFV Feature Test Cases
+========================
+
+HA
+--
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc019.rst
+   opnfv_yardstick_tc025.rst
+   opnfv_yardstick_tc045.rst
+   opnfv_yardstick_tc046.rst
+   opnfv_yardstick_tc047.rst
+   opnfv_yardstick_tc048.rst
+   opnfv_yardstick_tc049.rst
+   opnfv_yardstick_tc050.rst
+   opnfv_yardstick_tc051.rst
+   opnfv_yardstick_tc052.rst
+   opnfv_yardstick_tc053.rst
+   opnfv_yardstick_tc054.rst
+
+IPv6
+----
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc027.rst
+
+KVM
+---
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc028.rst
+
+Parser
+------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc040.rst
+
+StorPerf
+--------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc074.rst
+
+virtual Traffic Classifier
+--------------------------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc006.rst
+   opnfv_yardstick_tc007.rst
+   opnfv_yardstick_tc020.rst
+   opnfv_yardstick_tc021.rst
+
+Templates
+=========
+
+.. toctree::
+   :maxdepth: 1
+
+   testcase_description_v2_template
+   Yardstick_task_templates
+
diff --git a/docs/testing/user/userguide/Yardstick_task_templates.rst b/docs/testing/user/userguide/Yardstick_task_templates.rst
new file mode 100755
index 000000000..e8130dd2a
--- /dev/null
+++ b/docs/testing/user/userguide/Yardstick_task_templates.rst
@@ -0,0 +1,160 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+Task Template Syntax
+====================
+
+Basic template syntax
+---------------------
+A nice feature of the input task format used in Yardstick is that it supports
+template syntax based on Jinja2.
+This turns out to be extremely useful when, say, you have a fixed structure of
+your task but you want to parameterize this task in some way.
+For example, imagine your input task file (task.yaml) runs a set of Ping
+scenarios:
+
+::
+
+  # Sample benchmark task config file
+  # measure network latency using ping
+  schema: "yardstick:task:0.1"
+
+  scenarios:
+  -
+    type: Ping
+    options:
+      packetsize: 200
+    host: athena.demo
+    target: ares.demo
+
+    runner:
+      type: Duration
+      duration: 60
+      interval: 1
+
+    sla:
+      max_rtt: 10
+      action: monitor
+
+  context:
+    ...
+
+Let's say you want to run the same set of scenarios with the same
+runner/context/sla, but you want to try another packetsize to compare the
+performance.
+The most elegant solution is then to turn the packetsize name into a template
+variable:
+
+::
+
+  # Sample benchmark task config file
+  # measure network latency using ping
+
+  schema: "yardstick:task:0.1"
+  scenarios:
+  -
+    type: Ping
+    options:
+      packetsize: {{packetsize}}
+    host: athena.demo
+    target: ares.demo
+
+    runner:
+      type: Duration
+      duration: 60
+      interval: 1
+
+    sla:
+      max_rtt: 10
+      action: monitor
+
+  context:
+    ...
+
+and then pass the argument value for {{packetsize}} when starting a task with
+this configuration file.
+Yardstick provides you with different ways to do that:
+
+1. Pass the argument values directly in the command-line interface (with
+   either a JSON or YAML dictionary):
+
+::
+
+  yardstick task start samples/ping-template.yaml \
+    --task-args '{"packetsize":"200"}'
+
+2. Refer to a file that specifies the argument values (JSON/YAML):
+
+::
+
+  yardstick task start samples/ping-template.yaml --task-args-file args.yaml
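+
+For illustration, the argument file for the template above could be created
+and used as follows (a sketch; the file name args.yaml and the value are
+arbitrary):
+
+::
+
+  # Create a YAML file holding the template arguments, then start the task
+  cat > args.yaml <<EOF
+  packetsize: "200"
+  EOF
+  yardstick task start samples/ping-template.yaml --task-args-file args.yaml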
+
+Using the default values
+------------------------
+Note that the Jinja2 template syntax allows you to set default values for
+your parameters.
+With default values set, your task file will work even if you don't
+parameterize it explicitly while starting a task.
+The default values should be set using the {% set ... %} clause (task.yaml).
+For example:
+
+::
+
+  # Sample benchmark task config file
+  # measure network latency using ping
+  schema: "yardstick:task:0.1"
+  {% set packetsize = packetsize or "100" %}
+  scenarios:
+  -
+    type: Ping
+    options:
+      packetsize: {{packetsize}}
+    host: athena.demo
+    target: ares.demo
+
+    runner:
+      type: Duration
+      duration: 60
+      interval: 1
+  ...
+
+If you don't pass a value for {{packetsize}} while starting a task, the
+default one will be used.
+
+Advanced templates
+------------------
+
+Yardstick makes it possible to use all the power of the Jinja2 template
+syntax, including the mechanism of built-in functions.
+As an example, let us make up a task file that will do a block storage
+performance test.
+The input task file (fio-template.yaml) below uses the Jinja2 for-endfor
+construct to accomplish that:
+
+::
+
+  # Test block sizes of 4KB, 8KB, 64KB, 1MB
+  # Test 5 workloads: read, write, randwrite, randread, rw
+  schema: "yardstick:task:0.1"
+
+  scenarios:
+  {% for bs in ['4k', '8k', '64k', '1024k' ] %}
+  {% for rw in ['read', 'write', 'randwrite', 'randread', 'rw' ] %}
+  -
+    type: Fio
+    options:
+      filename: /home/ubuntu/data.raw
+      bs: {{bs}}
+      rw: {{rw}}
+      ramp_time: 10
+    host: fio.demo
+    runner:
+      type: Duration
+      duration: 60
+      interval: 60
+
+  {% endfor %}
+  {% endfor %}
+  context:
+    ...
diff --git a/docs/testing/user/userguide/comp-intro.rst b/docs/testing/user/userguide/comp-intro.rst
new file mode 100644
index 000000000..ee68226ad
--- /dev/null
+++ b/docs/testing/user/userguide/comp-intro.rst
@@ -0,0 +1,37 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+=========
+Yardstick
+=========
+
+.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Presentation: https://wiki.opnfv.org/_media/opnfv_summit_-_yardstick_project.pdf
+.. _NFV-TST001: https://docbox.etsi.org/ISG/NFV/Open/Drafts/TST001_-_Pre-deployment_Validation/
+.. _Yardsticktst: https://wiki.opnfv.org/_media/opnfv_summit_-_bridging_opnfv_and_etsi.pdf
+
+The project's goal is to verify infrastructure compliance, from the perspective
+of a Virtual Network Function (VNF).
+
+The project's scope is the development of a test framework, *Yardstick*, test
+cases and test stimuli to enable Network Function Virtualization Infrastructure
+(NFVI) verification.
+
+In the OPNFV Brahmaputra release, generic test cases covering aspects of the
+metrics in the document ETSI GS NFV-TST001_, "Pre-deployment Testing; Report on
+Validation of NFV Environments and Services", are available; further OPNFV
+releases will provide extended testing of these metrics.
+
+The project also includes a sample VNF, the Virtual Traffic Classifier (VTC),
+and its experimental framework, *ApexLake*.
+
+*Yardstick* is used in OPNFV for verifying the OPNFV infrastructure and some of
+the OPNFV features. The *Yardstick* framework is deployed in several OPNFV
+community labs. It is *installer*, *infrastructure* and *application*
+independent.
+
+
+.. seealso:: This Presentation_ for an overview of *Yardstick*, and
+   Yardsticktst_ for material on the alignment between ETSI TST001 and
+   Yardstick.
diff --git a/docs/testing/user/userguide/glossary.rst b/docs/testing/user/userguide/glossary.rst new file mode 100644 index 000000000..f8ff41887 --- /dev/null +++ b/docs/testing/user/userguide/glossary.rst @@ -0,0 +1,65 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB and others. + +======== +Glossary +======== + +.. glossary:: + :sorted: + + API + Application Programming Interface + + DPI + Deep Packet Inspection + + DPDK + Data Plane Development Kit + + DSCP + Differentiated Services Code Point + + IGMP + Internet Group Management Protocol + + IOPS + Input/Output Operations Per Second + + NIC + Network Interface Controller + + PBFS + Packet Based per Flow State + + QoS + Quality of Service + + VLAN + Virtual LAN + + VM + Virtual Machine + + VNF + Virtual Network Function + + VNFC + Virtual Network Function Component + + NFVI + Network Function Virtualization Infrastructure + + SR-IOV + Single Root IO Virtualization + + SUT + System Under Test + + ToS + Type of Service + + VTC + Virtual Traffic Classifier diff --git a/docs/testing/user/userguide/images/Deployment.png b/docs/testing/user/userguide/images/Deployment.png new file mode 100755 index 000000000..aca5670cd Binary files /dev/null and b/docs/testing/user/userguide/images/Deployment.png differ diff --git a/docs/testing/user/userguide/images/Grafana_config.png b/docs/testing/user/userguide/images/Grafana_config.png new file mode 100644 index 000000000..cb63098dc Binary files /dev/null and b/docs/testing/user/userguide/images/Grafana_config.png differ diff --git a/docs/testing/user/userguide/images/InfluxDB_store.png b/docs/testing/user/userguide/images/InfluxDB_store.png new file mode 100644 index 000000000..1770fd255 Binary files /dev/null and b/docs/testing/user/userguide/images/InfluxDB_store.png differ diff --git a/docs/testing/user/userguide/images/Logical_view.png b/docs/testing/user/userguide/images/Logical_view.png new file mode 100644 index 000000000..cdb805448 Binary files /dev/null and b/docs/testing/user/userguide/images/Logical_view.png differ diff --git a/docs/testing/user/userguide/images/TC002.png b/docs/testing/user/userguide/images/TC002.png new file mode 100644 index 000000000..89154efcc Binary files /dev/null and b/docs/testing/user/userguide/images/TC002.png differ diff --git a/docs/testing/user/userguide/images/Use_case.png b/docs/testing/user/userguide/images/Use_case.png new file mode 100644 index 000000000..acd52f526 Binary files /dev/null and b/docs/testing/user/userguide/images/Use_case.png differ diff --git a/docs/testing/user/userguide/images/add.png b/docs/testing/user/userguide/images/add.png new file mode 100644 index 000000000..a88a1b146 Binary files /dev/null and b/docs/testing/user/userguide/images/add.png differ diff --git a/docs/testing/user/userguide/images/login.png b/docs/testing/user/userguide/images/login.png new file mode 100644 index 000000000..045e010e4 Binary files /dev/null and b/docs/testing/user/userguide/images/login.png differ diff --git a/docs/testing/user/userguide/images/results_visualization.png b/docs/testing/user/userguide/images/results_visualization.png new file mode 100644 index 000000000..cd092808b Binary files /dev/null and b/docs/testing/user/userguide/images/results_visualization.png differ diff --git a/docs/testing/user/userguide/images/test_execution_flow.png b/docs/testing/user/userguide/images/test_execution_flow.png new file mode 100644 index 
000000000..c20a931a4
Binary files /dev/null and b/docs/testing/user/userguide/images/test_execution_flow.png differ
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
new file mode 100644
index 000000000..1b963af61
--- /dev/null
+++ b/docs/testing/user/userguide/index.rst
@@ -0,0 +1,27 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+===========================================
+Performance Testing User Guide (Yardstick)
+===========================================
+
+.. toctree::
+   :maxdepth: 2
+
+   01-introduction
+   02-methodology
+   03-architecture
+   04-vtc-overview
+   05-apexlake_installation
+   06-apexlake_api
+   07-nsb-overview
+   08-nsb_installation
+   09-installation
+   10-yardstick_plugin
+   11-result-store-InfluxDB
+   12-grafana
+   13-list-of-tcs
+   glossary
+   references
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc001.rst b/docs/testing/user/userguide/opnfv_yardstick_tc001.rst
new file mode 100644
index 000000000..b53c508a6
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc001.rst
@@ -0,0 +1,133 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC001
+*************************************
+
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+
++-----------------------------------------------------------------------------+
+|Network Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC001_NETWORK PERFORMANCE |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Number of flows and throughput |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC001 is to evaluate the IaaS network |
+| | performance with regards to flows and throughput, such as if |
+| | and how different amounts of flows matter for the throughput |
+| | between hosts on different compute blades. Typically e.g. |
+| | the performance of a vSwitch depends on the number of flows |
+| | running through it. Also the performance of other equipment |
+| | or entities can depend on the number of flows or the packet |
+| | sizes used. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | pktgen |
+| | |
+| | Linux packet generator is a tool to generate packets at very |
+| | high speed in the kernel. pktgen is mainly used to test |
+| | network drivers and LAN equipment. It supports |
+| | multi-threading and can generate UDP packets with random MAC |
+| | addresses, IP addresses and port numbers, using multiple |
+| | CPUs on different PCI buses (PCI, PCIe) with Gigabit |
+| | Ethernet NICs. Its performance depends on hardware |
+| | parameters such as CPU processing speed, memory latency and |
+| | PCI bus speed; the transmit data rate can exceed 10 Gbit/s, |
+| | which satisfies most NIC test requirements.
+| | |
+| | (Pktgen is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Docker |
+| | image. |
+| | As an example see the /yardstick/tools/ directory for how |
+| | to generate a Linux image with pktgen included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | This test case uses Pktgen to generate packet flow between |
+|description | two hosts for simulating network workloads on the SUT. |
+| | |
++--------------+--------------------------------------------------------------+
+|traffic | An IP table is set up on the server to monitor received |
+|profile | packets. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc001.yaml |
+| | |
+| | Packet size is set to 60 bytes. |
+| | Number of ports: 10, 50, 100, 500 and 1000, where each |
+| | runs for 20 seconds. The whole sequence is run twice. |
+| | The client and server are distributed on different hardware. |
+| | |
+| | For SLA max_ppm is set to 1000. The amounts of configured |
+| | ports map to between 110 and 1001000 flows, respectively. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * amount of flows; |
+| | * test duration. |
+| | |
+| | Default values exist. |
+| | |
+| | SLA (optional): max_ppm: The number of packets per million |
+| | packets sent that it is acceptable to lose, i.e. to not |
+| | receive. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is used for generating high network |
+| | throughput to simulate certain workloads on the SUT. Hence |
+| | it should work with other test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | pktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with pktgen included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Two host VMs are booted, as server and client. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the server VM by using ssh. |
+| | The 'pktgen_benchmark' bash script is copied from the Jump |
+| | Host to the server VM via the ssh tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | An IP table is set up on the server to monitor received |
+| | packets. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | pktgen is invoked to generate packet flow between the server |
+| | and client for simulating network workloads on the SUT. |
+| | Results are processed and checked against the SLA. Logs are |
+| | produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | Two host VMs are deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc002.rst b/docs/testing/user/userguide/opnfv_yardstick_tc002.rst
new file mode 100644
index 000000000..c98780fd5
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc002.rst
@@ -0,0 +1,126 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC002
+*************************************
+
+.. _cirros-image: https://download.cirros-cloud.net
+.. _Ping: https://linux.die.net/man/8/ping
+
++-----------------------------------------------------------------------------+
+|Network Latency |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC002_NETWORK LATENCY |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | RTT (Round Trip Time) |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC002 is to do a basic verification that |
+| | network latency is within acceptable boundaries when packets |
+| | travel between hosts located on the same or different |
+| | compute blades. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | ping |
+| | |
+| | Ping is a computer network administration software utility |
+| | used to test the reachability of a host on an Internet |
+| | Protocol (IP) network. It measures the round-trip time for |
+| | packets sent from the originating host to a destination |
+| | computer that are echoed back to the source. |
+| | |
+| | Ping is normally part of any Linux distribution, hence it |
+| | doesn't need to be installed. It is also part of the |
+| | Yardstick Docker image. |
+| | (For example, a Cirros image can also be downloaded from |
+| | cirros-image_; it includes ping.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test topology | Ping packets (ICMP protocol's mandatory ECHO_REQUEST |
+| | datagram) are sent from the host VM to the target VM(s) to |
+| | elicit ICMP ECHO_RESPONSE. |
+| | |
+| | For one host VM there can be multiple target VMs. |
+| | Host VM and target VM(s) can be on the same or different |
+| | compute blades. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc002.yaml |
+| | |
+| | Packet size 100 bytes. Test duration 60 seconds. |
+| | One ping every 10 seconds. The test is iterated two times. |
+| | SLA RTT is set to maximum 10 ms. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This test case can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * burst sizes; |
+| | * ping intervals; |
+| | * test durations; |
+| | * test iterations. |
+| | |
+| | Default values exist. |
+| | |
+| | SLA is optional. The SLA in this test case serves as an |
+| | example. Considerably lower RTT is expected, and is also |
+| | normal to achieve in balanced L2 environments. However, to |
+| | cover most configurations, both bare metal and fully |
+| | virtualized ones, this value should be possible to achieve |
+| | and acceptable for black box testing. Many real time |
+| | applications start to suffer badly if the RTT is higher |
+| | than this. Some may also suffer close to this RTT, while |
+| | others may not suffer at all. It is a compromise that may |
+| | have to be tuned for different configuration purposes. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic tests. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | Ping_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image (cirros-image) needs to be installed |
+|conditions | into Glance with ping included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Two host VMs are booted, as server and client. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the server VM by using ssh. |
+| | The 'ping_benchmark' bash script is copied from the Jump |
+| | Host to the server VM via the ssh tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Ping is invoked. Ping packets are sent from the server VM to |
+| | the client VM. RTT results are calculated and checked |
+| | against the SLA. Logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Two host VMs are deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Test should not PASS if any RTT is above the optional SLA |
+| | value, or if there is a test case execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc004.rst b/docs/testing/user/userguide/opnfv_yardstick_tc004.rst
new file mode 100644
index 000000000..3554b3826
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc004.rst
@@ -0,0 +1,110 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC004
+*************************************
+
+.. _cachestat: https://github.com/brendangregg/perf-tools/tree/master/fs
+
++-----------------------------------------------------------------------------+
+|Cache Utilization |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC004_CACHE Utilization |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | cache hit, cache miss, hit/miss ratio, buffer size and page |
+| | cache size |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC004 is to evaluate the IaaS compute |
+| | capability with regards to cache utilization. This test case |
+| | should be run in parallel with other Yardstick test cases |
+| | and not run as a stand-alone test case. |
+| | |
+| | This test case measures cache usage statistics, including |
+| | cache hit, cache miss, hit ratio, buffer cache size and page |
+| | cache size, with some workloads running on the |
+| | infrastructure. Both average and maximum values are |
+| | collected. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | cachestat |
+| | |
+| | cachestat is a tool using Linux ftrace capabilities for |
+| | showing Linux page cache hit/miss statistics. |
+| | |
+| | (cachestat is not always part of a Linux distribution, hence |
+| | it needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with cachestat included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | The cachestat test is invoked in a host VM on a compute |
+|description | blade; the cachestat test requires some other test cases |
+| | running in the host to stimulate workload. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | File: cachestat.yaml (in the 'samples' directory) |
+| | |
+| | The interval is set to 1, i.e. the test repeats, pausing for |
+| | 1 second in between. |
+| | The test duration is set to 60 seconds. |
+| | |
+| | SLA is not available in this test case. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * interval; |
+| | * runner Duration. |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic tests. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | cachestat_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with cachestat included in the image. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | A host VM with cachestat installed is booted. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the host VM by using ssh. |
+| | The 'cache_stat' bash script is copied from the Jump Host to |
+| | the server VM via the ssh tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The 'cache_stat' script is invoked. Raw cache usage |
+| | statistics are collected and filtered. Average and maximum |
+| | values are calculated and recorded. Logs are produced and |
+| | stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VM is deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | None. Cache utilization results are collected and stored. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc005.rst b/docs/testing/user/userguide/opnfv_yardstick_tc005.rst
new file mode 100644
index 000000000..1c2d71d81
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc005.rst
@@ -0,0 +1,125 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC005
+*************************************
+
+.. _fio: http://bluestop.org/files/fio/HOWTO.txt
+
++-----------------------------------------------------------------------------+
+|Storage Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC005_STORAGE PERFORMANCE |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | IOPS (Average IOs performed per second), |
+| | Throughput (Average disk read/write bandwidth rate), |
+| | Latency (Average disk read/write latency) |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC005 is to evaluate the IaaS storage |
+| | performance with regards to IOPS, throughput and latency. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | fio |
+| | |
+| | fio is an I/O tool meant to be used both for benchmark and |
+| | stress/hardware verification. It has support for 19 |
+| | different types of I/O engines (sync, mmap, libaio, |
+| | posixaio, SG v3, splice, null, network, syslet, guasi, |
+| | solarisaio, and more), I/O priorities (for newer Linux |
+| | kernels), rate I/O, forked or threaded jobs, and much more. |
+| | |
+| | (fio is not always part of a Linux distribution, hence it |
+| | needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with fio included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | The fio test is invoked in a host VM on a compute blade; a |
+|description | job file as well as parameters are passed to fio and fio |
+| | will start doing what the job file tells it to do. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc005.yaml |
+| | |
+| | IO types are set to read, write, randwrite, randread, rw. |
+| | IO block sizes are set to 4KB, 64KB, 1024KB. |
+| | fio is run for each IO type and IO block size scheme; |
+| | each iteration runs for 30 seconds (10 for ramp time, 20 for |
+| | runtime). |
+| | |
+| | For SLA, minimum read/write iops is set to 100, |
+| | minimum read/write throughput is set to 400 KB/s, |
+| | and maximum read/write latency is set to 20000 usec. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This test case can be configured with different: |
+| | |
+| | * IO types; |
+| | * IO block size; |
+| | * IO depth; |
+| | * ramp time; |
+| | * test duration. |
+| | |
+| | Default values exist. |
+| | |
+| | SLA is optional. The SLA in this test case serves as an |
+| | example. Considerably higher throughput and lower latency |
+| | are expected. However, to cover most configurations, both |
+| | bare metal and fully virtualized ones, this value should be |
+| | possible to achieve and acceptable for black box testing. |
+| | Many heavy IO applications start to suffer badly if the |
+| | read/write bandwidths are lower than this. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic tests. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | fio_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with fio included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | A host VM with fio installed is booted. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the host VM by using ssh. |
+| | The 'fio_benchmark' bash script is copied from the Jump Host |
+| | to the host VM via the ssh tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The 'fio_benchmark' script is invoked. Simulated IO |
+| | operations are started. IOPS, disk read/write bandwidth and |
+| | latency are recorded and checked against the SLA. Logs are |
+| | produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VM is deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc006.rst b/docs/testing/user/userguide/opnfv_yardstick_tc006.rst
new file mode 100644
index 000000000..2ccb417c1
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc006.rst
@@ -0,0 +1,144 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+*************************************
+Yardstick Test Case Description TC006
+*************************************
+
+.. _DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
+.. _rfc2544: https://www.ietf.org/rfc/rfc2544.txt
+
++-----------------------------------------------------------------------------+
+|Network Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC006_Virtual Traffic Classifier Data Plane |
+| | Throughput Benchmarking Test. |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Throughput |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To measure the throughput supported by the virtual Traffic |
+| | Classifier according to the RFC2544 methodology for a |
+| | user-defined set of vTC deployment configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc006.yaml |
+| | |
+| | packet_size: size of the packets to be used during the |
+| | throughput calculation. |
+| | Allowed values: [64, 128, 256, 512, 1024, 1280, 1518] |
+| | |
+| | vnic_type: type of vNIC to be used. |
+| | Allowed values are: |
+| | - normal: for default OvS port configuration |
+| | - direct: for SR-IOV port configuration |
+| | Default value: None |
+| | |
+| | vtc_flavor: OpenStack flavor to be used for the vTC. |
+| | Default available values are: m1.small, m1.medium, |
+| | and m1.large, but the user can create his/her own |
+| | flavor and give it as input. |
+| | Default value: None |
+| | |
+| | vlan_sender: vlan tag of the network on which the vTC will |
+| | receive traffic (VLAN Network 1). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | vlan_receiver: vlan tag of the network on which the vTC |
+| | will send traffic back to the packet generator |
+| | (VLAN Network 2). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | default_net_name: Neutron name of the default network that |
+| | is used for access to the internet from the vTC |
+| | (vNIC 1). |
+| | |
+| | default_subnet_name: subnet name for vNIC 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_1_name: Neutron name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_1_name: Neutron subnet name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_2_name: Neutron name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_2_name: Neutron subnet name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | DPDK pktgen |
+| | |
+| | DPDK pktgen is not part of a Linux distribution, |
+| | hence it needs to be installed by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | DPDK pktgen: DPDKpktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
+| | RFC 2544: rfc2544_ |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different flavors, vNIC type |
+| | and packet sizes. Default values exist as specified above. |
+| | The vNIC type and flavor MUST be specified by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The vTC has been successfully instantiated and configured. |
+| | The user has correctly assigned the values to the deployment |
+| | configuration parameters. |
+| | |
+| | - Multicast traffic MUST be enabled on the network. |
+| | The data network switches need to be configured in |
+| | order to manage multicast traffic. |
+| | - In case SR-IOV vNICs are used, SR-IOV compatible NICs |
+| | must be used on the compute node. |
+| | - Yardstick needs to be installed on a host connected to the |
+| | data network and the host must have 2 DPDK-compatible |
+| | NICs. Proper configuration of DPDK and DPDK pktgen is |
+| | required before running the test case. |
+| | (For further instructions please refer to the ApexLake |
+| | documentation). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected results |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The vTC is deployed, according to the user-defined |
+| | configuration. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | The vTC is correctly deployed and configured as necessary. |
+| | The initialization script has been correctly executed and |
+| | the vTC is ready to receive and process the traffic. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test case is executed with the selected parameters: |
+| | - vTC flavor |
+| | - vNIC type |
+| | - packet size |
+| | The traffic is sent to the vTC using the maximum available |
+| | traffic rate for 60 seconds. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The vTC instance forwards all the packets back to the packet |
+| | generator for 60 seconds, as specified by RFC 2544. |
+| | |
+| | Steps 3 and 4 are executed several times, with different |
+| | rates, in order to find the maximum supported traffic rate |
+| | according to the current definition of throughput in RFC |
+| | 2544. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The result of the test is a number between 0 and 100 which |
+| | represents the throughput in terms of percentage of the |
+| | available pktgen NIC bandwidth. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc007.rst b/docs/testing/user/userguide/opnfv_yardstick_tc007.rst
new file mode 100644
index 000000000..87663f816
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc007.rst
@@ -0,0 +1,162 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+*************************************
+Yardstick Test Case Description TC007
+*************************************
+
+.. _DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
+.. _rfc2544: https://www.ietf.org/rfc/rfc2544.txt
+
++-----------------------------------------------------------------------------+
+|Network Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC007_Virtual Traffic Classifier Data Plane |
+| | Throughput Benchmarking Test in Presence of Noisy |
+| | Neighbours |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Throughput |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To measure the throughput supported by the virtual Traffic |
+| | Classifier according to the RFC2544 methodology for a |
+| | user-defined set of vTC deployment configurations in the |
+| | presence of noisy neighbours. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc007.yaml |
+| | |
+| | packet_size: size of the packets to be used during the |
+| | throughput calculation. |
+| | Allowed values: [64, 128, 256, 512, 1024, 1280, 1518] |
+| | |
+| | vnic_type: type of vNIC to be used. |
+| | Allowed values are: |
+| | - normal: for default OvS port configuration |
+| | - direct: for SR-IOV port configuration |
+| | |
+| | vtc_flavor: OpenStack flavor to be used for the vTC. |
+| | Default available values are: m1.small, m1.medium, |
+| | and m1.large, but the user can create his/her own |
+| | flavor and give it as input. |
+| | |
+| | num_of_neighbours: Number of noisy neighbours (VMs) to be |
+| | instantiated during the experiment. |
+| | Allowed values: range (1, 10) |
+| | |
+| | amount_of_ram: RAM to be used by each neighbour. |
+| | Allowed values: ['250M', '1G', '2G', '3G', '4G', '5G', |
+| | '6G', '7G', '8G', '9G', '10G'] |
+| | Default value: 256M |
+| | |
+| | number_of_cores: Number of cores to be used by each noisy |
+| | neighbour. |
+| | Allowed values: range (1, 10) |
+| | Default value: 1 |
+| | |
+| | vlan_sender: vlan tag of the network on which the vTC will |
+| | receive traffic (VLAN Network 1). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | vlan_receiver: vlan tag of the network on which the vTC |
+| | will send traffic back to the packet generator |
+| | (VLAN Network 2). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | default_net_name: Neutron name of the default network that |
+| | is used for access to the internet from the vTC |
+| | (vNIC 1). |
+| | |
+| | default_subnet_name: subnet name for vNIC 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_1_name: Neutron name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_1_name: Neutron subnet name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_2_name: Neutron name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_2_name: Neutron subnet name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | DPDK pktgen |
+| | |
+| | DPDK pktgen is not part of a Linux distribution, |
+| | hence it needs to be installed by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | DPDKpktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
+| | rfc2544_ |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different flavors, vNIC type |
+| | and packet sizes. Default values exist as specified above. |
+| | The vNIC type and flavor MUST be specified by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The vTC has been successfully instantiated and configured. |
+| | The user has correctly assigned the values to the deployment |
+| | configuration parameters. |
+| | |
+| | - Multicast traffic MUST be enabled on the network. |
+| | The data network switches need to be configured in |
+| | order to manage multicast traffic. |
+| | - In case SR-IOV vNICs are used, SR-IOV compatible NICs |
+| | must be used on the compute node. |
+| | - Yardstick needs to be installed on a host connected to the |
+| | data network and the host must have 2 DPDK-compatible |
+| | NICs. Proper configuration of DPDK and DPDK pktgen is |
+| | required before running the test case. |
+| | (For further instructions please refer to the ApexLake |
+| | documentation). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected results |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The noisy neighbours are deployed as required by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | The vTC is deployed, according to the configuration required |
+| | by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The vTC is correctly deployed and configured as necessary. |
+| | The initialization script has been correctly executed and |
+| | the vTC is ready to receive and process the traffic. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The test case is executed with the parameters specified by |
+| | the user: |
+| | - vTC flavor |
+| | - vNIC type |
+| | - packet size |
+| | The traffic is sent to the vTC using the maximum available |
+| | traffic rate. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | The vTC instance forwards all the packets back to the |
+| | packet generator for 60 seconds, as specified by RFC 2544. |
+| | |
+| | Steps 4 and 5 are executed several times with different |
+| | traffic rates, in order to find the maximum supported |
+| | traffic rate, according to the current definition of |
+| | throughput in RFC 2544. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The result of the test is a number between 0 and 100 which |
+| | represents the throughput in terms of percentage of the |
+| | available pktgen NIC bandwidth. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc008.rst b/docs/testing/user/userguide/opnfv_yardstick_tc008.rst
new file mode 100644
index 000000000..a4ecaf6ae
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc008.rst
@@ -0,0 +1,90 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC008
+*************************************
+
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+
++-----------------------------------------------------------------------------+
+|Packet Loss Extended Test |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC008_NW PERF, Packet loss Extended Test |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Number of flows, packet size and throughput |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS network performance with regards to |
+| | flows and throughput, such as if and how different packet |
+| | sizes and amounts of flows matter for the throughput between |
+| | VMs on different compute blades. Typically e.g. the |
+| | performance of a vSwitch |
+| | depends on the number of flows running through it. Also the |
+| | performance of other equipment or entities can depend |
+| | on the number of flows or the packet sizes used. |
+| | The purpose is also to be able to spot trends. Test results, |
+| | graphs and similar shall be stored for comparison reasons |
+| | and product evolution understanding between different OPNFV |
+| | versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc008.yaml |
+| | |
+| | Packet size: 64, 128, 256, 512, 1024, 1280 and 1518 bytes. |
+| | |
+| | Number of ports: 1, 10, 50, 100, 500 and 1000. The amounts |
+| | of configured ports map to between 2 and 1001000 flows, |
+| | respectively. Each packet_size/port_amount combination is |
+| | run ten times, for 20 seconds each. Then the next |
+| | packet_size/port_amount combination is run, and so on. |
+| | |
+| | The client and server are distributed on different HW. |
+| | |
+| | For SLA max_ppm is set to 1000. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | pktgen |
+| | |
+| | (Pktgen is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Docker |
+| | image. |
+| | As an example see the /yardstick/tools/ directory for how |
+| | to generate a Linux image with pktgen included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|references | pktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes, amount |
+| | of flows and test duration. Default values exist. |
+| | |
+| | SLA (optional): max_ppm: The number of packets per million |
+| | packets sent that it is acceptable to lose, i.e. to not |
+| | receive. |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc009.rst b/docs/testing/user/userguide/opnfv_yardstick_tc009.rst
new file mode 100644
index 000000000..d6f445361
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc009.rst
@@ -0,0 +1,89 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC009
+*************************************
+
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+
++-----------------------------------------------------------------------------+
+|Packet Loss |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC009_NW PERF, Packet loss |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Number of flows, packets lost and throughput |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS network performance with regards to |
+| | flows and throughput, such as if and how different amounts |
+| | of flows matter for the throughput between VMs on different |
+| | compute blades. |
+| | Typically e.g. the performance of a vSwitch depends on the |
+| | number of flows running through it. Also, the performance of |
+| | other equipment or entities can depend on the number of |
+| | flows or the packet sizes used. |
+| | The purpose is also to be able to spot trends. Test results, |
+| | graphs and similar shall be stored for comparison reasons |
+| | and to understand product evolution between different OPNFV |
+| | versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc009.yaml |
+| | |
+| | Packet size: 64 bytes |
+| | |
+| | Number of ports: 1, 10, 50, 100, 500 and 1000. The |
+| | configured port amounts map to between 2 and 1001000 flows, |
+| | respectively. Each port amount is run ten times, for 20 |
+| | seconds each. Then the next port_amount is run, and so on. |
+| | |
+| | The client and server are distributed on different HW. |
+| | |
+| | For SLA max_ppm is set to 1000. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | pktgen |
+| | |
+| | (Pktgen is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Docker |
+| | image. |
+| | As an example see the /yardstick/tools/ directory for how |
+| | to generate a Linux image with pktgen included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|references | pktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes, amounts |
+| | of flows and test durations. Default values exist. |
+| | |
+| | SLA (optional): max_ppm: The number of packets per million |
+| | packets sent that it is acceptable to lose, i.e. that are |
+| | not received. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with pktgen included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The hosts are installed, as server and client. pktgen is |
+| | invoked and logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
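+
+The scenario shape is the same as for TC008, with the packet size fixed
+at 64 bytes. A sketch, with the same caveat that the keys follow the
+options above rather than the shipped file::
+
+  scenarios:
+  -
+    type: Pktgen
+    options:
+      packetsize: 64
+      number_of_ports: 50
+      duration: 20
+    sla:
+      max_ppm: 1000
+      action: monitor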
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
new file mode 100644
index 000000000..202307de6
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
@@ -0,0 +1,154 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC010
+*************************************
+
+.. _lat_mem_rd: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html
+
++-----------------------------------------------------------------------------+
+|Memory Latency |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC010_MEMORY LATENCY |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Memory read latency (nanoseconds) |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC010 is to evaluate the IaaS compute |
+| | performance with regards to memory read latency. |
+| | It measures the memory read latency for varying memory sizes |
+| | and strides. The whole memory hierarchy is measured. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | LMbench |
+| | |
+| | LMbench is a suite of operating system microbenchmarks. This |
+| | test uses the lat_mem_rd tool from that suite. The suite |
+| | includes: |
+| | * Context switching |
+| | * Networking: connection establishment, pipe, TCP, UDP, and |
+| | RPC hot potato |
+| | * File system creates and deletes |
+| | * Process creation |
+| | * Signal handling |
+| | * System call overhead |
+| | * Memory read latency |
+| | |
+| | (LMbench is not always part of a Linux distribution, hence |
+| | it needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with LMbench included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | The LMbench lat_mem_rd benchmark measures memory read |
+|description | latency for varying memory sizes and strides. |
+| | |
+| | The benchmark runs as two nested loops. The outer loop is |
+| | the stride size. The inner loop is the array size. For each |
+| | array size, the benchmark creates a ring of pointers that |
+| | point backward one stride. Traversing the array is done by: |
+| | |
+| | p = (char **)*p; |
+| | |
+| | in a for loop (the overhead of the for loop is not |
+| | significant; the loop is an unrolled loop 100 loads long). |
+| | The size of the array varies from 512 bytes to (typically) |
+| | eight megabytes. For the small sizes, the cache will have an |
+| | effect, and the loads will be much faster. This becomes much |
+| | more apparent when the data is plotted. |
+| | |
+| | Only data accesses are measured; the instruction cache is |
+| | not measured. |
+| | |
+| | The results are reported in nanoseconds per load and have |
+| | been verified accurate to within a few nanoseconds on an SGI |
+| | Indy. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | File: opnfv_yardstick_tc010.yaml |
+| | |
+| | * SLA (max_latency): 30 nanoseconds |
+| | * Stride - 128 bytes |
+| | * Stop size - 64 megabytes |
+| | * Iterations: 10 - test is run 10 times iteratively. |
+| | * Interval: 1 - there is 1 second delay between each |
+| | iteration. |
+| | |
+| | SLA is optional. The SLA in this test case serves as an |
+| | example. Considerably lower read latency is expected. |
+| | However, to cover most configurations, both baremetal and |
+| | fully virtualized ones, this value should be possible to |
+| | achieve and acceptable for black box testing. |
+| | Many heavy IO applications start to suffer badly if the |
+| | read latency is higher than this. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * strides; |
+| | * stop_size; |
+| | * iterations and intervals. |
+| | |
+| | Default values exist. |
+| | |
+| | SLA (optional): max_latency: The maximum memory latency |
+| | that is accepted. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic tests. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | LMbench lat_mem_rd_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with LMbench included in the image. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | A host VM with LMbench installed is booted. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the host VM by using ssh. |
+| | The 'lmbench_latency_benchmark' bash script is copied from |
+| | the Jump Host to the host VM via the ssh tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The 'lmbench_latency_benchmark' script is invoked. LMbench's |
+| | lat_mem_rd benchmark starts to measure memory read latency |
+| | for varying memory sizes and strides. Memory read latencies |
+| | are recorded and checked against the SLA. Logs are produced |
+| | and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VM is deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Test fails if the measured memory latency is above the SLA |
+| | value or if there is a test case execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
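+
+A minimal scenario sketch for this test case (the option keys follow the
+configuration listed above and are illustrative, not a verbatim copy of
+the shipped opnfv_yardstick_tc010.yaml)::
+
+  scenarios:
+  -
+    type: Lmbench
+    options:
+      test_type: "latency"
+      stride: 128                # bytes
+      stop_size: 64.0            # megabytes
+    runner:
+      type: Iteration
+      iterations: 10
+      interval: 1
+    sla:
+      max_latency: 30            # nanoseconds
+      action: monitor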
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
new file mode 100644
index 000000000..48bdef497
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
@@ -0,0 +1,123 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC011
+*************************************
+
+.. _iperf3: https://iperf.fr/
+
++-----------------------------------------------------------------------------+
+|Packet delay variation between VMs |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC011_PACKET DELAY VARIATION BETWEEN VMs |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | jitter: packet delay variation (ms) |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC011 is to evaluate the IaaS network |
+| | performance with regards to network jitter (packet delay |
+| | variation). |
+| | It measures the packet delay variation when sending packets |
+| | from one VM to the other. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | iperf3 |
+| | |
+| | iPerf3 is a tool for active measurements of the maximum |
+| | achievable bandwidth on IP networks. It supports tuning of |
+| | various parameters related to timing, buffers and protocols. |
+| | The UDP protocol can be used to measure jitter. |
+| | |
+| | (iperf3 is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Docker |
+| | image. As an example see the /yardstick/tools/ directory for |
+| | how to generate a Linux image with iperf3 included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | An iperf3 test is invoked between a host VM and a target VM. |
+|description | |
+| | Jitter calculations are continuously computed by the server, |
+| | as specified by RTP in RFC 1889. The client records a 64 bit |
+| | second/microsecond timestamp in the packet. The server |
+| | computes the relative transit time as (server's receive time |
+| | - client's send time). The client's and server's clocks do |
+| | not need to be synchronized; any difference is subtracted |
+| | out in the jitter calculation. Jitter is the smoothed mean |
+| | of differences between consecutive transit times. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | File: opnfv_yardstick_tc011.yaml |
+| | |
+| | * options: |
+| | protocol: udp # The protocol used by the iperf3 tool |
+| | bandwidth: 20m # It will send the given number of packets |
+| | without pausing |
+| | * runner: |
+| | duration: 30 # Total test duration 30 seconds. |
+| | |
+| | * SLA (optional): |
+| | jitter: 10 (ms) # The maximum amount of jitter that is |
+| | accepted. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * bandwidth: Test case can be configured with different |
+| | bandwidths. |
+| | |
+| | * duration: The test duration can be configured. |
+| | |
+| | * jitter: SLA is optional. The SLA in this test case |
+| | serves as an example. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic tests. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | iperf3_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with iperf3 included in the image. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Two host VMs with iperf3 installed are booted, as server and |
+| | client. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the host VM by using ssh. |
+| | An iperf3 server is started on the server VM via the ssh |
+| | tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The iperf3 benchmark is invoked. Jitter is calculated and |
+| | checked against the SLA. Logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VMs are deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Test should not PASS if any jitter is above the optional SLA |
+| | value, or if there is a test case execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
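+
+Expressed as a task file scenario, the options above map roughly onto the
+following sketch (key names are illustrative)::
+
+  scenarios:
+  -
+    type: Iperf3
+    options:
+      udp: udp                   # use UDP so that jitter is reported
+      bandwidth: 20m
+    runner:
+      type: Duration
+      duration: 30
+    sla:
+      jitter: 10                 # milliseconds
+      action: monitor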
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
new file mode 100644
index 000000000..b56e829f5
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
@@ -0,0 +1,135 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC012
+*************************************
+
+.. _bw_mem: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html
+
++-----------------------------------------------------------------------------+
+|Memory Bandwidth |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC012_MEMORY BANDWIDTH |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Memory read/write bandwidth (MBps) |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC012 is to evaluate the IaaS compute |
+| | performance with regards to memory throughput. |
+| | It measures the rate at which data can be read from and |
+| | written to the memory (this includes all levels of memory). |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | LMbench |
+| | |
+| | LMbench is a suite of operating system microbenchmarks. |
+| | This test uses the bw_mem tool from that suite. The suite |
+| | includes: |
+| | * Cached file read |
+| | * Memory copy (bcopy) |
+| | * Memory read |
+| | * Memory write |
+| | * Pipe |
+| | * TCP |
+| | |
+| | (LMbench is not always part of a Linux distribution, hence |
+| | it needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with LMbench included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | The LMbench bw_mem benchmark allocates twice the specified |
+|description | amount of memory, zeros it, and then times the copying of |
+| | the first half to the second half. The benchmark is invoked |
+| | in a host VM on a compute blade. Results are reported in |
+| | megabytes moved per second. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | File: opnfv_yardstick_tc012.yaml |
+| | |
+| | * SLA (optional): 15000 (MBps) min_bw: The minimum amount of |
+| | memory bandwidth that is accepted. |
+| | * Size: 10 240 kB - test allocates twice that size |
+| | (20 480 kB), zeros it, and then measures the time it takes |
+| | to copy from one side to another. |
+| | * Benchmark: rdwr - measures the time to read data into |
+| | memory and then write data to the same location. |
+| | * Warmup: 0 - the number of iterations to perform before |
+| | taking actual measurements. |
+| | * Iterations: 10 - test is run 10 times iteratively. |
+| | * Interval: 1 - there is 1 second delay between each |
+| | iteration. |
+| | |
+| | SLA is optional. The SLA in this test case serves as an |
+| | example. Considerably higher bandwidth is expected. |
+| | However, to cover most configurations, both baremetal and |
+| | fully virtualized ones, this value should be possible to |
+| | achieve and acceptable for black box testing. |
+| | Many heavy IO applications start to suffer badly if the |
+| | read/write bandwidths are lower than this. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * memory sizes; |
+| | * memory operations (such as rd, wr, rdwr, cp, frd, fwr, |
+| | fcp, bzero, bcopy); |
+| | * number of warmup iterations; |
+| | * iterations and intervals. |
+| | |
+| | Default values exist. |
+| | |
+| | SLA (optional): min_bandwidth: The minimum memory bandwidth |
+| | that is accepted. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic tests. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | LMbench bw_mem_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with LMbench included in the image. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | A host VM with LMbench installed is booted. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the host VM by using ssh. |
+| | The "lmbench_bandwidth_benchmark" bash script is copied from |
+| | the Jump Host to the host VM via the ssh tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The 'lmbench_bandwidth_benchmark' script is invoked. |
+| | LMbench's bw_mem benchmark starts to measure memory |
+| | read/write bandwidth. Memory read/write bandwidth results |
+| | are recorded and checked against the SLA. Logs are produced |
+| | and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VM is deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Test fails if the measured memory bandwidth is below the SLA |
+| | value or if there is a test case execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
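+
+A scenario sketch mirroring the configuration above (illustrative keys,
+not the verbatim shipped file)::
+
+  scenarios:
+  -
+    type: Lmbench
+    options:
+      test_type: "bandwidth"
+      size: 10240                # kilobytes
+      benchmark: rdwr
+      warmup: 0
+    runner:
+      type: Iteration
+      iterations: 10
+      interval: 1
+    sla:
+      min_bandwidth: 15000       # MBps
+      action: monitor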
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc014.rst b/docs/testing/user/userguide/opnfv_yardstick_tc014.rst
new file mode 100644
index 000000000..1b0d7831a
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc014.rst
@@ -0,0 +1,126 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC014
+*************************************
+
+.. _unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench
+
++-----------------------------------------------------------------------------+
+|Processing speed |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC014_PROCESSING SPEED |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | score of single cpu running, |
+| | score of parallel running |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC014 is to evaluate the IaaS compute |
+| | performance with regards to CPU processing speed. |
+| | It measures the scores of single CPU running and parallel |
+| | running. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | UnixBench |
+| | |
+| | UnixBench is a widely used CPU benchmarking tool. It can |
+| | measure the performance of bash scripts, and of CPUs in |
+| | multithreading and single threading. It can also measure the |
+| | performance of parallel tasks. Specific disk IO tests for |
+| | small and large files are performed as well. It can be used |
+| | to measure both dedicated Linux servers and Linux VPS |
+| | servers, running CentOS, Debian, Ubuntu, Fedora and other |
+| | distributions. |
+| | |
+| | (UnixBench is not always part of a Linux distribution, hence |
+| | it needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with UnixBench included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | UnixBench runs system benchmarks in a host VM on a compute |
+|description | blade, getting information on the CPUs in the system. If |
+| | the system has more than one CPU, the tests will be run |
+| | twice -- once with a single copy of each test running at |
+| | once, and once with N copies, where N is the number of CPUs. |
+| | |
+| | UnixBench will process a set of results from a single test |
+| | by averaging the individual pass results into a single |
+| | final value. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc014.yaml |
+| | |
+| | run_mode: run UnixBench in quiet mode or verbose mode |
+| | test_type: dhry2reg, whetstone and so on |
+| | |
+| | For SLA with single_score and parallel_score, both can be |
+| | set by the user; the default is NA. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * test types; |
+| | * dhry2reg; |
+| | * whetstone. |
+| | |
+| | Default values exist. |
+| | |
+| | SLA (optional): min_score: The minimum UnixBench score that |
+| | is accepted. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic tests. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | unixbench_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with UnixBench included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | A host VM with UnixBench installed is booted. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the host VM by using ssh. |
+| | The "unixbench_benchmark" bash script is copied from the |
+| | Jump Host to the host VM via the ssh tunnel. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | UnixBench is invoked. All the tests are executed using the |
+| | "Run" script in the top level of the UnixBench directory. |
+| | The "Run" script will run a standard "index" test, and save |
+| | the report in the "results" directory. Then the report is |
+| | processed by "unixbench_benchmark" and checked against the |
+| | SLA. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VM is deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
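+
+A scenario sketch for this test case (the SLA scores below are arbitrary
+example values and the keys are assumptions based on the configuration
+above)::
+
+  scenarios:
+  -
+    type: UnixBench
+    options:
+      run_mode: "verbose"
+      test_type: "dhry2reg"
+    runner:
+      type: Iteration
+      iterations: 1
+    sla:
+      single_score: "100"
+      parallel_score: "500"
+      action: monitor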
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
new file mode 100644
index 000000000..1af502253
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
@@ -0,0 +1,134 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC019
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC019_HA: Control node Openstack service down|
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | service provided by OpenStack (like nova-api, |
+| | neutron-server) on the control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of a specific Openstack |
+| | service on a selected control node, then checks whether the |
+| | request of the related Openstack command is OK and the |
+| | killed processes are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If multiple processes on the host use |
+| | the same name, all of them are killed by this attacker. |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "nova-api" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "openstack-cmd" monitor constantly requests a |
+| | specific Openstack command, and needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for the |
+| | request |
+| | |
+| | 2. the "process" monitor checks whether a process is running |
+| | on a specific node, and needs three parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name to monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova image-list" |
+| | monitor2: |
+| | -monitor_type: "process" |
+| | -process_name: "nova-api" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1) service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified Openstack command request. |
+| | 2) process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to being recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc019.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case uses the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run as an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect to the host through SSH, and then |
+| | execute the kill-process script with the param value |
+| | specified by "process_name" |
+| | |
+| | Result: The process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | This is the action taken when the test case exits. It will |
+| | check the status of the specified process on the host, and |
+| | restart the process if it is not running, for the next test |
+| | cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
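+
+The attacker and monitors described above combine into one HA scenario.
+A trimmed sketch, using the example values from this table (the overall
+layout is illustrative)::
+
+  scenarios:
+  -
+    type: ServiceHA
+    options:
+      attackers:
+      - fault_type: "kill-process"
+        process_name: "nova-api"
+        host: node1
+      monitors:
+      - monitor_type: "openstack-cmd"
+        command_name: "nova image-list"
+      - monitor_type: "process"
+        process_name: "nova-api"
+        host: node1
+    nodes:
+      node1: node1.LF            # resolved via pod.yaml
+    runner:
+      type: Duration
+      duration: 1
+    sla:
+      outage_time: 5             # seconds
+      action: monitor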
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc020.rst b/docs/testing/user/userguide/opnfv_yardstick_tc020.rst
new file mode 100644
index 000000000..f2f1d408b
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc020.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+*************************************
+Yardstick Test Case Description TC020
+*************************************
+
+.. _DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
+.. _rfc2544: https://www.ietf.org/rfc/rfc2544.txt
+
++-----------------------------------------------------------------------------+
+|Network Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC0020_Virtual Traffic Classifier |
+| | Instantiation Test |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Failure |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To verify that a newly instantiated vTC is 'alive' and |
+| | functional and that its instantiation is correctly supported |
+| | by the infrastructure. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc020.yaml |
+| | |
+| | vnic_type: type of VNIC to be used. |
+| | Allowed values are: |
+| | - normal: for default OvS port configuration |
+| | - direct: for SR-IOV port configuration |
+| | Default value: None |
+| | |
+| | vtc_flavor: OpenStack flavor to be used for the vTC |
+| | Default available values are: m1.small, m1.medium, |
+| | and m1.large, but the user can create his/her own |
+| | flavor and give it as input |
+| | Default value: None |
+| | |
+| | vlan_sender: vlan tag of the network on which the vTC will |
+| | receive traffic (VLAN Network 1). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | vlan_receiver: vlan tag of the network on which the vTC |
+| | will send traffic back to the packet generator |
+| | (VLAN Network 2). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | default_net_name: neutron name of the default network that |
+| | is used for access to the internet from the vTC |
+| | (vNIC 1). |
+| | |
+| | default_subnet_name: subnet name for vNIC1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_1_name: Neutron Name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_1_name: Subnet Neutron name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_2_name: Neutron Name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_2_name: Subnet Neutron name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | DPDK pktgen |
+| | |
+| | DPDK Pktgen is not part of a Linux distribution, |
+| | hence it needs to be installed by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | DPDKpktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
+| | rfc2544_ |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different flavors, vNIC type |
+| | and packet sizes. Default values exist as specified above. |
+| | The vNIC type and flavor MUST be specified by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The vTC has been successfully instantiated and configured. |
+| | The user has correctly assigned the values to the deployment |
+| | configuration parameters. |
+| | |
+| | - Multicast traffic MUST be enabled on the network. |
+| | The Data network switches need to be configured in |
+| | order to manage multicast traffic. |
+| | Installation and configuration of smcroute is required |
+| | before running the test case. |
+| | (For further instructions please refer to the ApexLake |
+| | documentation). |
+| | - If SR-IOV vNICs are used, SR-IOV compatible NICs |
+| | must be used on the compute node. |
+| | - Yardstick needs to be installed on a host connected to the |
+| | data network and the host must have 2 DPDK-compatible |
+| | NICs. Proper configuration of DPDK and DPDK pktgen is |
+| | required before running the test case. |
+| | (For further instructions please refer to the ApexLake |
+| | documentation). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected results |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The vTC is deployed, according to the configuration provided |
+| | by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | The vTC is correctly deployed and configured as necessary. |
+| | The initialization script has been correctly executed and |
+| | the vTC is ready to receive and process the traffic. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Test case is executed with the parameters specified by |
+| | the user: |
+| | - vTC flavor |
+| | - vNIC type |
+| | A constant rate traffic is sent to the vTC for 10 seconds. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The vTC instance tags all the packets and sends them back to |
+| | the packet generator for 10 seconds. |
+| | |
+| | The framework checks that the packet generator receives |
+| | back all the packets with the correct tag from the vTC. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The vTC is deemed to be successfully instantiated if all |
+| | packets are sent back with the right tag as requested, |
+| | else it is deemed DoA (Dead on arrival). |
+| | |
++--------------+--------------------------------------------------------------+
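+
+An illustrative options fragment for this scenario (the scenario type
+name and the network names are assumptions; the keys follow the
+configuration parameters above)::
+
+  scenarios:
+  -
+    type: vtc_instantiation_validation   # assumed scenario type name
+    options:
+      vnic_type: normal          # default OvS port configuration
+      vtc_flavor: m1.small
+      vlan_sender: 1000          # VLAN Network 1 tag
+      vlan_receiver: 1001        # VLAN Network 2 tag
+      default_net_name: monitoring
+      default_subnet_name: monitoring_subnet
+      vlan_net_1_name: inbound_traffic_network
+      vlan_subnet_1_name: inbound_traffic_subnet
+      vlan_net_2_name: outbound_traffic_network
+      vlan_subnet_2_name: outbound_traffic_subnet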
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc021.rst b/docs/testing/user/userguide/opnfv_yardstick_tc021.rst
new file mode 100644
index 000000000..c7adc870a
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc021.rst
@@ -0,0 +1,157 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+*************************************
+Yardstick Test Case Description TC021
+*************************************
+
+.. _DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
+.. _rfc2544: https://www.ietf.org/rfc/rfc2544.txt
+
++-----------------------------------------------------------------------------+
+|Network Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC0021_Virtual Traffic Classifier |
+| | Instantiation Test in Presence of Noisy Neighbours |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Failure |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To verify that a newly instantiated vTC is 'alive' and |
+| | functional and that its instantiation is correctly supported |
+| | by the infrastructure in the presence of noisy neighbours. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc021.yaml |
+| | |
+| | vnic_type: type of VNIC to be used. |
+| | Allowed values are: |
+| | - normal: for default OvS port configuration |
+| | - direct: for SR-IOV port configuration |
+| | Default value: None |
+| | |
+| | vtc_flavor: OpenStack flavor to be used for the vTC |
+| | Default available values are: m1.small, m1.medium, |
+| | and m1.large, but the user can create his/her own |
+| | flavor and give it as input |
+| | Default value: None |
+| | |
+| | num_of_neighbours: Number of noisy neighbours (VMs) to be |
+| | instantiated during the experiment. |
+| | Allowed values: range (1, 10) |
+| | |
+| | amount_of_ram: RAM to be used by each neighbour. |
+| | Allowed values: ['250M', '1G', '2G', '3G', '4G', '5G', |
+| | '6G', '7G', '8G', '9G', '10G'] |
+| | Default value: 256M |
+| | |
+| | number_of_cores: Number of cores to be used by each noisy |
+| | neighbour. |
+| | Allowed values: range (1, 10) |
+| | Default value: 1 |
+| | |
+| | vlan_sender: vlan tag of the network on which the vTC will |
+| | receive traffic (VLAN Network 1). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | vlan_receiver: vlan tag of the network on which the vTC |
+| | will send traffic back to the packet generator |
+| | (VLAN Network 2). |
+| | Allowed values: range (1, 4096) |
+| | |
+| | default_net_name: neutron name of the default network that |
+| | is used for access to the internet from the vTC |
+| | (vNIC 1). |
+| | |
+| | default_subnet_name: subnet name for vNIC1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_1_name: Neutron Name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_1_name: Subnet Neutron name for VLAN Network 1 |
+| | (information available through Neutron). |
+| | |
+| | vlan_net_2_name: Neutron Name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
+| | vlan_subnet_2_name: Subnet Neutron name for VLAN Network 2 |
+| | (information available through Neutron). |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | DPDK pktgen |
+| | |
+| | DPDK Pktgen is not part of a Linux distribution, |
+| | hence it needs to be installed by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | DPDK Pktgen: DPDKpktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
+| | RFC 2544: rfc2544_ |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different flavors, vNIC type |
+| | and packet sizes. Default values exist as specified above. |
+| | The vNIC type and flavor MUST be specified by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The vTC has been successfully instantiated and configured. |
+| | The user has correctly assigned the values to the deployment |
+| | configuration parameters. |
+| | |
+| | - Multicast traffic MUST be enabled on the network. |
+| | The Data network switches need to be configured in |
+| | order to manage multicast traffic. |
+| | Installation and configuration of smcroute is required |
+| | before running the test case. |
+| | (For further instructions please refer to the ApexLake |
+| | documentation). |
+| | - If SR-IOV vNICs are used, SR-IOV compatible NICs |
+| | must be used on the compute node. |
+| | - Yardstick needs to be installed on a host connected to the |
+| | data network and the host must have 2 DPDK-compatible |
+| | NICs. Proper configuration of DPDK and DPDK pktgen is |
+| | required before running the test case. |
+| | (For further instructions please refer to the ApexLake |
+| | documentation). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected results |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The noisy neighbours are deployed as required by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | The vTC is deployed, according to the configuration provided |
+| | by the user. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The vTC is correctly deployed and configured as necessary. |
+| | The initialization script has been correctly executed and |
+| | the vTC is ready to receive and process the traffic. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Test case is executed with the selected parameters: |
+| | - vTC flavor |
+| | - vNIC type |
+| | A constant rate traffic is sent to the vTC for 10 seconds. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | The vTC instance tags all the packets and sends them back to |
+| | the packet generator for 10 seconds. |
+| | |
+| | The framework checks if the packet generator receives back |
+| | all the packets with the correct tag from the vTC. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The vTC is deemed to be successfully instantiated if all |
+| | packets are sent back with the right tag as requested, |
+| | else it is deemed DoA (Dead on arrival). |
+| | |
++--------------+--------------------------------------------------------------+
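+
+The options extend the TC020 instantiation parameters with the noisy
+neighbour knobs. An illustrative fragment (key names assumed from the
+table above)::
+
+  options:
+    vnic_type: normal
+    vtc_flavor: m1.small
+    num_of_neighbours: 2
+    amount_of_ram: "1G"
+    number_of_cores: 2
+    vlan_sender: 1000
+    vlan_receiver: 1001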
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc024.rst b/docs/testing/user/userguide/opnfv_yardstick_tc024.rst
new file mode 100644
index 000000000..8d15e8d2f
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc024.rst
@@ -0,0 +1,76 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC024
+*************************************
+
+.. _man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html
+
++-----------------------------------------------------------------------------+
+| CPU Load |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC024_CPU Load |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | CPU load |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the CPU load performance of the IaaS. This test |
+| | case should be run in parallel to other Yardstick test cases |
+| | and not run as a stand-alone test case. |
+| | Average, minimum and maximum values are obtained. |
+| | The purpose is also to be able to spot trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: cpuload.yaml (in the 'samples' directory) |
+| | |
+| | * interval: 1 - repeat, pausing 1 second in between. |
+| | * count: 10 - display statistics 10 times, then exit. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | mpstat |
+| | |
+| | (mpstat is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Glance |
+| | image. However, if mpstat is not present the TC instead uses |
+| | /proc/stat as source to produce "mpstat" output.) |
+| | |
++--------------+--------------------------------------------------------------+
+|references | man-pages_ |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * interval; |
+| | * count; |
+| | * runner Iteration and intervals. |
+| | |
+| | There are default values for each above-mentioned option. |
+| | Run in background with other test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with mpstat included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The host is installed. The related TC, or TCs, is |
+| | invoked and mpstat logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | None. CPU load results are fetched and stored. |
+| | |
++--------------+--------------------------------------------------------------+
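+
+A minimal scenario sketch matching the configuration above (illustrative
+keys; see cpuload.yaml in the 'samples' directory for the real file)::
+
+  scenarios:
+  -
+    type: CPUload
+    options:
+      interval: 1
+      count: 10
+    runner:
+      type: Duration
+      duration: 60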
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
new file mode 100644
index 000000000..0e2e9a5f8
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
@@ -0,0 +1,123 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC025
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node abnormally shutdown High Availability |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC025_HA: OpenStack Controller Node |
+| | abnormally shutdown |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | controller node. When one of the controller nodes is |
+| | abnormally shut down, the services provided by it should |
+| | still be available. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case shuts down a specified controller node with |
+| | some fault injection tools, then checks whether all services |
+| | provided by the controller node are OK with some monitor |
+| | tools. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "host-shutdown" is |
+| | needed. This attacker includes two parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "host-shutdown" in |
+| | this test case. |
+| | 2) host: the name of a controller node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "host-shutdown" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, one kind of monitor is needed: |
+| | 1. the "openstack-cmd" monitor constantly requests a |
+| | specific Openstack command, and needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for the |
+| | request |
+| | |
+| | There are four instances of the "openstack-cmd" monitor: |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova image-list" |
+| | monitor2: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "neutron router-list" |
+| | monitor3: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "heat stack-list" |
+| | monitor4: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "cinder list" |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified Openstack command request. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc025.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the host |
+| | being shut down to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case uses the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run as an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect to the host through SSH, and then |
+| | execute the shutdown script on the host |
+| | |
+| | Result: The host will be shut down. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: All monitor results will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | This is the action taken when the test case exits. It |
+| | restarts the specified controller node if it is not |
+| | restarted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
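+
+An illustrative attacker/monitor fragment for this scenario (the
+structure is assumed; the values follow the examples above)::
+
+  options:
+    attackers:
+    - fault_type: "host-shutdown"
+      host: node1
+    monitors:
+    - monitor_type: "openstack-cmd"
+      command_name: "nova image-list"
+    - monitor_type: "openstack-cmd"
+      command_name: "neutron router-list"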
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
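+
+Below is a minimal, illustrative sketch of how the attacker and monitors
+described above could be expressed in the test case YAML. The scenario type
+name (``ServiceHA``) and the SLA key shown here are assumptions for
+illustration, not a verbatim copy of the shipped opnfv_yardstick_tc025.yaml::
+
+    scenarios:
+    -
+      type: ServiceHA                # assumed scenario type for HA cases
+      options:
+        attackers:
+        - fault_type: "host-shutdown"
+          host: node1                # node name as recorded in pod.yaml
+        monitors:
+        - monitor_type: "openstack-cmd"
+          command_name: "nova image-list"
+        - monitor_type: "openstack-cmd"
+          command_name: "neutron router-list"
+      sla:
+        outage_time: 5               # illustrative service_outage_time limit (s)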
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
new file mode 100644
index 000000000..125fd59fa
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
@@ -0,0 +1,95 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC027
+*************************************
+
+.. _ipv6: https://wiki.opnfv.org/ipv6_opnfv_project
+
++-----------------------------------------------------------------------------+
+|IPv6 connectivity between nodes on the tenant network                        |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC027_IPv6 connectivity                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | RTT, Round Trip Time                                         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To do a basic verification that IPv6 connectivity is within  |
+|              | acceptable boundaries when ipv6 packets travel between hosts |
+|              | located on the same or different compute blades.             |
+|              | The purpose is also to be able to spot trends. Test results, |
+|              | graphs and similar shall be stored for comparison reasons and|
+|              | product evolution understanding between different OPNFV      |
+|              | versions and/or configurations.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc027.yaml                             |
+|              |                                                              |
+|              | Packet size 56 bytes.                                        |
+|              | SLA RTT is set to maximum 30 ms.                             |
+|              | The ipv6 test case can be configured as three independent    |
+|              | modules (setup, run, teardown). If you only want to set up   |
+|              | the ipv6 testing environment and run tests of your own,      |
+|              | "run_step" in the task yaml file should be configured as     |
+|              | "setup". If you want to set up the environment and run the   |
+|              | ping6 test automatically, "run_step" should be configured as |
+|              | "setup, run". And if you already have an environment set up  |
+|              | and only want to verify the connectivity of the ipv6         |
+|              | network, "run_step" should be "run". By default, the three   |
+|              | modules run sequentially. A configuration sketch is given    |
+|              | after this table.                                            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | ping6                                                        |
+|              |                                                              |
+|              | Ping6 is normally part of a Linux distribution, hence it     |
+|              | doesn't need to be installed.                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ipv6_                                                        |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | The test case can be configured with different run steps:    |
+|              | setup, run benchmark and teardown can be run independently.  |
+|              | SLA is optional. The SLA in this test case serves as an      |
+|              | example. Considerably lower RTT is expected.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with ping6 included in it.                                   |
+|              |                                                              |
+|              | For Brahmaputra, a compass_os_nosdn_ha deploy scenario is    |
+|              | needed. More installers and more SDN deploy scenarios will   |
+|              | be supported soon.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | To setup IPV6 testing environment:                           |
+|              | 1. disable security group                                    |
+|              | 2. create (ipv6, ipv4) router, network and subnet            |
+|              | 3. create VRouter, VM1, VM2                                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | To run ping6 to verify IPV6 connectivity:                    |
+|              | 1. ssh to VM1                                                |
+|              | 2. Ping6 to ipv6 router from VM1                             |
+|              | 3. Get the result (RTT) and store the logs                   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | To teardown IPV6 testing environment                         |
+|              | 1. delete VRouter, VM1, VM2                                  |
+|              | 2. delete (ipv6, ipv4) router, network and subnet            |
+|              | 3. enable security group                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Test should not PASS if any RTT is above the optional SLA    |
+|              | value, or if there is a test case execution problem.         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
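+
+A minimal, illustrative fragment of the task file showing the "run_step"
+option discussed above. The scenario type name (``Ping6``) and the exact
+option layout are assumptions for illustration, not a verbatim copy of the
+shipped opnfv_yardstick_tc027.yaml::
+
+    scenarios:
+    -
+      type: Ping6                # assumed scenario type
+      options:
+        packetsize: 56
+      run_step: 'setup,run'      # or 'setup', 'run', 'setup,run,teardown'
+      sla:
+        max_rtt: 30              # maximum acceptable RTT in ms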
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc028.rst b/docs/testing/user/userguide/opnfv_yardstick_tc028.rst
new file mode 100644
index 000000000..24206f33f
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc028.rst
@@ -0,0 +1,70 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co., Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC028
+*************************************
+
+.. _Cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest
+
++-----------------------------------------------------------------------------+
+|KVM Latency measurements                                                     |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC028_KVM Latency measurements               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | min, avg and max latency                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS KVM virtualization capability with      |
+|              | regards to min, avg and max latency.                         |
+|              | The purpose is also to be able to spot trends. Test results, |
+|              | graphs and similar shall be stored for comparison reasons    |
+|              | and product evolution understanding between different OPNFV  |
+|              | versions and/or configurations.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: samples/cyclictest-node-context.yaml                   |
+|              | (an illustrative sketch of the options is given after this   |
+|              | table)                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Cyclictest                                                   |
+|              |                                                              |
+|              | (Cyclictest is not always part of a Linux distribution,      |
+|              | hence it needs to be installed. As an example see the        |
+|              | /yardstick/tools/ directory for how to generate a Linux      |
+|              | image with cyclictest included.)                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | Cyclictest_                                                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | This test case is mainly used for kvm4nfv project CI         |
+|              | verification: upgrade the host Linux kernel, boot a guest VM |
+|              | and update its Linux kernel, and then run cyclictest to      |
+|              | verify that the new kernel works well.                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test kernel rpm, test sequence scripts and test guest    |
+|conditions    | image need to be put into the right folders, as specified in |
+|              | the test case yaml file.                                     |
+|              | The test guest image needs to have cyclictest included in it.|
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The host and guest OS kernels are upgraded. Cyclictest is    |
+|              | invoked and logs are produced and stored.                    |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
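+
+For illustration, a fragment of the kind of options such a Cyclictest
+scenario carries. The option names below are assumptions patterned on the
+parameters cyclictest itself accepts, not a verbatim copy of the shipped
+samples/cyclictest-node-context.yaml::
+
+    scenarios:
+    -
+      type: Cyclictest
+      options:
+        priority: 99           # real-time priority of the measuring thread
+        loops: 1000            # number of measurement iterations
+        interval: 1000         # base interval between thread wakeups (us)
+      sla:
+        max_latency: 1000      # illustrative: fail if max latency exceeds this (us)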
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc037.rst b/docs/testing/user/userguide/opnfv_yardstick_tc037.rst
new file mode 100644
index 000000000..5a6e1eaae
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc037.rst
@@ -0,0 +1,167 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC037
+*************************************
+
+.. _cirros-image: https://download.cirros-cloud.net
+.. _Ping: https://linux.die.net/man/8/ping
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+.. _mpstat: http://www.linuxcommand.org/man_pages/mpstat1.html
+
++-----------------------------------------------------------------------------+
+|Latency, CPU Load, Throughput, Packet Loss                                   |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC037_LATENCY,CPU LOAD,THROUGHPUT,           |
+|              | PACKET LOSS                                                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Number of flows, latency, throughput, packet loss,           |
+|              | CPU utilization percentage, CPU interrupts per second        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | The purpose of TC037 is to evaluate the IaaS compute         |
+|              | capacity and network performance with regards to CPU         |
+|              | utilization, packet flows and network throughput, such as if |
+|              | and how different amounts of flows matter for the throughput |
+|              | between hosts on different compute blades, and the CPU load  |
+|              | variation.                                                   |
+|              |                                                              |
+|              | Typically e.g. the performance of a vSwitch depends on the   |
+|              | number of flows running through it. Also performance of      |
+|              | other equipment or entities can depend on the number of      |
+|              | flows or the packet sizes used.                              |
+|              |                                                              |
+|              | The purpose is also to be able to spot the trends.           |
+|              | Test results, graphs and similar shall be stored for         |
+|              | comparison reasons and product evolution understanding       |
+|              | between different OPNFV versions and/or configurations.      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Ping, Pktgen, mpstat                                         |
+|              |                                                              |
+|              | Ping is a computer network administration software utility   |
+|              | used to test the reachability of a host on an Internet       |
+|              | Protocol (IP) network. It measures the round-trip time for   |
+|              | packets sent from the originating host to a destination      |
+|              | computer that are echoed back to the source.                 |
+|              |                                                              |
+|              | The Linux packet generator (pktgen) is a tool that generates |
+|              | packets at very high speed in the kernel, and is mainly used |
+|              | to drive network equipment and test LANs. pktgen supports    |
+|              | multi-threading and can generate UDP packets with random MAC |
+|              | addresses, IP addresses and port numbers, using multiple CPU |
+|              | processors and NICs on different PCI buses (PCI, PCIe). Its  |
+|              | performance depends on hardware parameters such as CPU       |
+|              | processing speed, memory latency and PCI bus speed; the      |
+|              | transmit data rate can exceed 10 Gbit/s, which satisfies     |
+|              | most NIC test requirements.                                  |
+|              |                                                              |
+|              | The mpstat command writes to standard output activities for  |
+|              | each available processor, processor 0 being the first one.   |
+|              | Global average activities among all processors are also      |
+|              | reported. The mpstat command can be used both on SMP and UP  |
+|              | machines, but in the latter, only global average activities  |
+|              | will be printed.                                             |
+|              |                                                              |
+|              | (Ping is normally part of any Linux distribution, hence it   |
+|              | doesn't need to be installed. It is also part of the         |
+|              | Yardstick Docker image.                                      |
+|              | For example also a Cirros image can be downloaded from       |
+|              | cirros-image_, it includes ping.                             |
+|              |                                                              |
+|              | Pktgen and mpstat are not always part of a Linux             |
+|              | distribution, hence they need to be installed. They are part |
+|              | of the Yardstick Docker image.                               |
+|              | As an example see the /yardstick/tools/ directory for how    |
+|              | to generate a Linux image with pktgen and mpstat included.)  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test          | This test case uses Pktgen to generate packet flow between   |
+|description   | two hosts for simulating network workloads on the SUT.       |
+|              | Ping packets (ICMP protocol's mandatory ECHO_REQUEST         |
+|              | datagram) are sent from a host VM to the target VM(s) to     |
+|              | elicit ICMP ECHO_RESPONSE, meanwhile CPU activities are      |
+|              | monitored by mpstat.                                         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc037.yaml                             |
+|              |                                                              |
+|              | Packet size is set to 64 bytes.                              |
+|              | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000.     |
+|              | The configured port amounts map to 2 up to 1001000 flows,    |
+|              | respectively (see the note after this table). Each port      |
+|              | amount is run two times, for 20 seconds each. Then the next  |
+|              | port_amount is run, and so on.                               |
+|              | During the test CPU load on both client and server, and the  |
+|              | network latency between the client and server are measured.  |
+|              | The client and server are distributed on different hardware. |
+|              | mpstat monitoring interval is set to 1 second.               |
+|              | ping packet size is set to 100 bytes.                        |
+|              | For SLA max_ppm is set to 1000.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              |   * pktgen packet sizes;                                     |
+|              |   * amount of flows;                                         |
+|              |   * test duration;                                           |
+|              |   * ping packet size;                                        |
+|              |   * mpstat monitor interval.                                 |
+|              |                                                              |
+|              | Default values exist.                                        |
+|              |                                                              |
+|              | SLA (optional): max_ppm: The number of packets per million   |
+|              | packets sent that it is acceptable to lose, i.e. that are    |
+|              | not received.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | Ping_                                                        |
+|              |                                                              |
+|              | mpstat_                                                      |
+|              |                                                              |
+|              | pktgen_                                                      |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with pktgen and mpstat included in it.                       |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | Two host VMs are booted, as server and client.               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | Yardstick is connected with the server VM by using ssh.      |
+|              | The 'pktgen_benchmark' and 'ping_benchmark' bash scripts are |
+|              | copied from the Jump Host to the server VM via the ssh       |
+|              | tunnel.                                                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | An iptables rule is set up on the server to monitor for      |
+|              | received packets.                                            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | pktgen is invoked to generate packet flow between the server |
+|              | and client for simulating network workloads on the SUT. Ping |
+|              | is invoked. Ping packets are sent from server VM to client   |
+|              | VM. mpstat is invoked, recording activities for each         |
+|              | available processor. Results are processed and checked       |
+|              | against the SLA. Logs are produced and stored.               |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 5        | The two host VMs are deleted.                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
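+
+Note on the port-to-flow mapping quoted above: the endpoints (1 port -> 2
+flows, 1000 ports -> 1001000 flows) are consistent with
+flows = port_amount * (port_amount + 1); this formula is inferred from the
+quoted endpoints, not taken from the scenario code. An illustrative fragment
+of the options involved (field names assumed)::
+
+    scenarios:
+    -
+      type: Pktgen               # assumed scenario type name
+      options:
+        packetsize: 64
+        number_of_ports: 1000    # ~1001000 flows under the formula above
+        duration: 20
+      sla:
+        max_ppm: 1000            # packet loss threshold, packets per million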
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc038.rst b/docs/testing/user/userguide/opnfv_yardstick_tc038.rst
new file mode 100644
index 000000000..692c76819
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc038.rst
@@ -0,0 +1,104 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+*************************************
+Yardstick Test Case Description TC038
+*************************************
+
+.. _cirros: https://download.cirros-cloud.net
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+
++-----------------------------------------------------------------------------+
+|Latency, CPU Load, Throughput, Packet Loss (Extended measurements)           |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC038_Latency,CPU Load,Throughput,Packet Loss|
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Number of flows, latency, throughput, CPU load, packet loss  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS network performance with regards to     |
+|              | flows and throughput, such as if and how different amounts   |
+|              | of flows matter for the throughput between hosts on different|
+|              | compute blades. Typically e.g. the performance of a vSwitch  |
+|              | depends on the number of flows running through it. Also      |
+|              | performance of other equipment or entities can depend        |
+|              | on the number of flows or the packet sizes used.             |
+|              | The purpose is also to be able to spot trends. Test results, |
+|              | graphs and similar shall be stored for comparison reasons and|
+|              | product evolution understanding between different OPNFV      |
+|              | versions and/or configurations.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc038.yaml                             |
+|              |                                                              |
+|              | Packet size: 64 bytes                                        |
+|              | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000.     |
+|              | The configured port amounts map to 2 up to 1001000 flows,    |
+|              | respectively. Each port amount is run ten times, for 20      |
+|              | seconds each. Then the next port_amount is run, and so on.   |
+|              | During the test CPU load on both client and server, and the  |
+|              | network latency between the client and server are measured.  |
+|              | The client and server are distributed on different HW.       |
+|              | For SLA max_ppm is set to 1000 (see the note after this      |
+|              | table).                                                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | pktgen                                                       |
+|              |                                                              |
+|              | (Pktgen is not always part of a Linux distribution, hence it |
+|              | needs to be installed. It is part of the Yardstick Glance    |
+|              | image.                                                       |
+|              | As an example see the /yardstick/tools/ directory for how    |
+|              | to generate a Linux image with pktgen included.)             |
+|              |                                                              |
+|              | ping                                                         |
+|              |                                                              |
+|              | Ping is normally part of any Linux distribution, hence it    |
+|              | doesn't need to be installed. It is also part of the         |
+|              | Yardstick Glance image.                                      |
+|              | (For example also a cirros_ image can be downloaded, it      |
+|              | includes ping)                                               |
+|              |                                                              |
+|              | mpstat                                                       |
+|              |                                                              |
+|              | (Mpstat is not always part of a Linux distribution, hence it |
+|              | needs to be installed. It is part of the Yardstick Glance    |
+|              | image.)                                                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | Ping and Mpstat man pages                                    |
+|              |                                                              |
+|              | pktgen_                                                      |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes, amount   |
+|              | of flows and test duration. Default values exist.            |
+|              |                                                              |
+|              | SLA (optional): max_ppm: The number of packets per million   |
+|              | packets sent that it is acceptable to lose, i.e. that are    |
+|              | not received.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with pktgen included in it.                                  |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The hosts are installed, as server and client. pktgen is     |
+|              | invoked and logs are produced and stored.                    |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
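+
+The SLA above is expressed in packets per million (ppm): max_ppm: 1000 means
+that at most 0.1% of the sent packets may be lost. A minimal, illustrative
+SLA fragment (layout assumed, value taken from the description above)::
+
+    sla:
+      # lost_packets / sent_packets * 1,000,000 must stay at or below max_ppm
+      max_ppm: 1000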
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
new file mode 100644
index 000000000..d62fbf787
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
@@ -0,0 +1,65 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC040
+*************************************
+
+.. _Parser: https://wiki.opnfv.org/parser
+
++-----------------------------------------------------------------------------+
+|Verify Parser Yang-to-Tosca                                                  |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC040 Verify Parser Yang-to-Tosca            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | 1. tosca file which is converted from yang file by Parser    |
+|              | 2. result whether the output is the same as the expected     |
+|              | outcome                                                      |
++--------------+--------------------------------------------------------------+
+|test purpose  | To verify the function of Yang-to-Tosca in Parser.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc040.yaml                             |
+|              |                                                              |
+|              | yangfile: the path of the yangfile which you want to convert |
+|              | toscafile: the path of the toscafile which is your expected  |
+|              | outcome.                                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Parser                                                       |
+|              |                                                              |
+|              | (Parser is not part of a Linux distribution, hence it        |
+|              | needs to be installed. As an example see the                 |
+|              | /yardstick/benchmark/scenarios/parser/parser_setup.sh for    |
+|              | how to install it manually. Of course, it will be installed  |
+|              | and uninstalled automatically when you run this test case    |
+|              | with Yardstick.)                                             |
++--------------+--------------------------------------------------------------+
+|references    | Parser_                                                      |
+|              |                                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different paths of yangfile and  |
+|              | toscafile to fit your real environment to verify Parser.     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | No POD specific requirements have been identified.           |
+|conditions    | It can be run without a VM.                                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | Parser is installed without a VM; the Yang-to-Tosca module   |
+|              | is run to convert the yang file to a tosca file, and the     |
+|              | output is validated against the expected outcome.            |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if the output differs from the expected outcome,  |
+|              | or if there is a test case execution problem.                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
new file mode 100644
index 000000000..8660d9297
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
@@ -0,0 +1,87 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, ZTE and others.
+
+*************************************
+Yardstick Test Case Description TC042
+*************************************
+
+.. _DPDK: http://dpdk.org/doc/guides/index.html
+.. _Testpmd: http://dpdk.org/doc/guides/testpmd_app_ug/index.html
+.. _Pktgen-dpdk: http://pktgen.readthedocs.io/en/latest/index.html
+
++-----------------------------------------------------------------------------+
+|Network Performance                                                          |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC042_DPDK pktgen latency measurements       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | L2 Network Latency                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | Measure L2 network latency when DPDK is enabled between hosts|
+|              | on different compute blades.                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc042.yaml                             |
+|              |                                                              |
+|              | * Packet size: 64 bytes                                      |
+|              | * SLA (max_latency): 100 usec                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | DPDK_                                                        |
+|              | Pktgen-dpdk_                                                 |
+|              |                                                              |
+|              | (DPDK and Pktgen-dpdk are not part of a Linux distribution,  |
+|              | hence they need to be installed.                             |
+|              | As an example see the /yardstick/tools/ directory for how to |
+|              | generate a Linux image with DPDK and pktgen-dpdk included.)  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | DPDK_                                                        |
+|              |                                                              |
+|              | Pktgen-dpdk_                                                 |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes. Default  |
+|              | values exist.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with DPDK and pktgen-dpdk included in it.                    |
+|              |                                                              |
+|              | The NICs of the compute nodes must support DPDK on the POD.  |
+|              |                                                              |
+|              | Hugepages must be configured on at least the compute nodes.  |
+|              |                                                              |
+|              | To achieve a high performance result, it is recommended to   |
+|              | use NUMA, CPU pinning, OVS and so on.                        
| +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | The hosts are installed on different blades, as server and | +| | client. Both server and client have three interfaces. The | +| | first one is management such as ssh. The other two are used | +| | by DPDK. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Testpmd_ is invoked with configurations to forward packets | +| | from one DPDK port to the other on server. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | Pktgen-dpdk is invoked with configurations as a traffic | +| | generator and logs are produced and stored on client. | +| | | +| | Result: Logs are stored. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | Fails only if SLA is not passed, or if there is a test case | +| | execution problem. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc043.rst b/docs/testing/user/userguide/opnfv_yardstick_tc043.rst new file mode 100644 index 000000000..a873696dc --- /dev/null +++ b/docs/testing/user/userguide/opnfv_yardstick_tc043.rst @@ -0,0 +1,102 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. + +************************************* +Yardstick Test Case Description TC043 +************************************* + +.. _cirros-image: https://download.cirros-cloud.net +.. _Ping: https://linux.die.net/man/8/ping + ++-----------------------------------------------------------------------------+ +|Network Latency Between NFVI Nodes | +| | ++--------------+--------------------------------------------------------------+ +|test case id | OPNFV_YARDSTICK_TC043_LATENCY_BETWEEN_NFVI_NODES | +| | | ++--------------+--------------------------------------------------------------+ +|metric | RTT (Round Trip Time) | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The purpose of TC043 is to do a basic verification that | +| | network latency is within acceptable boundaries when packets | +| | travel between different NFVI nodes. | +| | | +| | The purpose is also to be able to spot the trends. | +| | Test results, graphs and similar shall be stored for | +| | comparison reasons and product evolution understanding | +| | between different OPNFV versions and/or configurations. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | ping | +| | | +| | Ping is a computer network administration software utility | +| | used to test the reachability of a host on an Internet | +| | Protocol (IP) network. It measures the round-trip time for | +| | packet sent from the originating host to a destination | +| | computer that are echoed back to the source. | +| | | ++--------------+--------------------------------------------------------------+ +|test topology | Ping packets (ICMP protocol's mandatory ECHO_REQUEST | +| | datagram) are sent from host node to target node to elicit | +| | ICMP ECHO_RESPONSE. 
| +| | | ++--------------+--------------------------------------------------------------+ +|configuration | file: opnfv_yardstick_tc043.yaml | +| | | +| | Packet size 100 bytes. Total test duration 600 seconds. | +| | One ping each 10 seconds. SLA RTT is set to maximum 10 ms. | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | This test case can be configured with different: | +| | | +| | * packet sizes; | +| | * burst sizes; | +| | * ping intervals; | +| | * test durations; | +| | * test iterations. | +| | | +| | Default values exist. | +| | | +| | SLA is optional. The SLA in this test case serves as an | +| | example. Considerably lower RTT is expected, and also normal | +| | to achieve in balanced L2 environments. However, to cover | +| | most configurations, both bare metal and fully virtualized | +| | ones, this value should be possible to achieve and | +| | acceptable for black box testing. Many real time | +| | applications start to suffer badly if the RTT time is higher | +| | than this. Some may suffer bad also close to this RTT, while | +| | others may not suffer at all. It is a compromise that may | +| | have to be tuned for different configuration purposes. | +| | | ++--------------+--------------------------------------------------------------+ +|references | Ping_ | +| | | +| | ETSI-NFV-TST001 | +| | | ++--------------+--------------------------------------------------------------+ +|pre_test | Each pod node must have ping included in it. | +|conditions | | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | Yardstick is connected with the NFVI node by using ssh. | +| | 'ping_benchmark' bash script is copyied from Jump Host to | +| | the NFVI node via the ssh tunnel. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Ping is invoked. Ping packets are sent from server node to | +| | client node. RTT results are calculated and checked against | +| | the SLA. Logs are produced and stored. | +| | | +| | Result: Logs are stored. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | Test should not PASS if any RTT is above the optional SLA | +| | value, or if there is a test case execution problem. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc044.rst b/docs/testing/user/userguide/opnfv_yardstick_tc044.rst new file mode 100644 index 000000000..2be8517a1 --- /dev/null +++ b/docs/testing/user/userguide/opnfv_yardstick_tc044.rst @@ -0,0 +1,82 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. + +************************************* +Yardstick Test Case Description TC044 +************************************* + +.. 
_man-pages: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
+
++-----------------------------------------------------------------------------+
+|Memory Utilization                                                           |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC044_Memory Utilization                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Memory utilization                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS compute capability with regards to      |
+|              | memory utilization. This test case should be run in parallel |
+|              | with other Yardstick test cases and not run as a stand-alone |
+|              | test case.                                                   |
+|              | Measure the memory usage statistics including used memory,   |
+|              | free memory, buffer, cache and shared memory.                |
+|              | Both average and maximum values are obtained.                |
+|              | The purpose is also to be able to spot trends.               |
+|              | Test results, graphs and similar shall be stored for         |
+|              | comparison reasons and product evolution understanding       |
+|              | between different OPNFV versions and/or configurations.      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | File: memload.yaml (in the 'samples' directory; an           |
+|              | illustrative fragment is given after this table)             |
+|              |                                                              |
+|              | * interval: 1 - repeat, pausing every second in-between.     |
+|              | * count: 10 - display statistics 10 times, then exit.        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | free                                                         |
+|              |                                                              |
+|              | free provides information about unused and used memory and   |
+|              | swap space on any computer running Linux or another Unix-like|
+|              | operating system.                                            |
+|              | free is normally part of a Linux distribution, hence it      |
+|              | doesn't need to be installed.                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | man-pages_                                                   |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              |   * interval;                                                |
+|              |   * count;                                                   |
+|              |   * runner Iteration and intervals.                          |
+|              |                                                              |
+|              | There are default values for each above-mentioned option.    |
+|              | This test case is typically run in the background, together  |
+|              | with other test cases.                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with free included in the image.                             |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The host is installed as client. The related TC, or TCs, is  |
+|              | invoked and free logs are produced and stored.               |
+|              |                                                              |
+|              | Result: logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None. Memory utilization results are fetched and stored.     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
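+
+An illustrative fragment of the samples/memload.yaml options described above.
+The scenario type name is an assumption; the interval and count values are
+the documented defaults::
+
+    scenarios:
+    -
+      type: MEMORYload           # assumed scenario type name
+      options:
+        interval: 1              # seconds between successive "free" samples
+        count: 10                # number of samples before exiting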
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc045.rst b/docs/testing/user/userguide/opnfv_yardstick_tc045.rst
new file mode 100644
index 000000000..0b0993c34
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc045.rst
@@ -0,0 +1,139 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC045
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Neutron Server            |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC045: Control node Openstack service down - |
+|              | neutron server                                               |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | network service provided by OpenStack (neutron-server) on    |
+|              | the control node.                                            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test method   | This test case kills the processes of the neutron-server     |
+|              | service on a selected control node, then checks whether the  |
+|              | related Openstack command requests are OK and the killed     |
+|              | processes are recovered.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "kill-process" is      |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "kill-process" in this   |
+|              | test case.                                                   |
+|              | 2) process_name: which is the process name of the specified  |
+|              | OpenStack service. If multiple processes use the same name   |
+|              | on the host, all of them are killed by this attacker.        |
+|              | In this case, this parameter should always be set to         |
+|              | "neutron-server".                                            |
+|              | 3) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -fault_type: "kill-process"                                  |
+|              | -process_name: "neutron-server"                              |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, two kinds of monitor are needed:          |
+|              | 1. the "openstack-cmd" monitor constantly requests a         |
+|              | specific Openstack command, and needs two parameters:        |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request. In this case, it should be a neutron related        |
+|              | command.                                                     |
+|              |                                                              |
+|              | 2. the "process" monitor checks whether a process is running |
+|              | on a specific node, and needs three parameters:              |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "process"    |
+|              | for this monitor.                                            |
+|              | 2) process_name: which is the name of the process to monitor |
+|              | 3) host: which is the name of the node running the process   |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "neutron agent-list"                          |
+|              | monitor2:                                                    |
+|              | -monitor_type: "process"                                     |
+|              | -process_name: "neutron-server"                              |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there are two metrics:                    |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified Openstack command request.   |
+|              | 2) process_recover_time: which indicates the maximum time    |
+|              | (seconds) from the process being killed to being recovered   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc045.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the process  |
+|              | being killed to stopping the monitors                        |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case uses the node name given   |
+|              | in the pod.yaml. A configuration sketch is given after this  |
+|              | table.                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect to the host through SSH, and then       |
+|              | execute the kill-process script with the parameter value     |
+|              | specified by "process_name"                                  |
+|              |                                                              |
+|              | Result: The process will be killed.                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | This is the action taken when the test case exits. It will   |
+|              | check the status of the specified process on the host, and   |
+|              | restart the process if it is not running, for subsequent     |
+|              | test cases.                                                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
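+
+A minimal, illustrative sketch of the attacker and monitors described above
+in test case YAML form. The scenario type name (``ServiceHA``) and the SLA
+keys are assumptions for illustration, not a verbatim copy of the shipped
+opnfv_yardstick_tc045.yaml::
+
+    scenarios:
+    -
+      type: ServiceHA                  # assumed scenario type for HA cases
+      options:
+        attackers:
+        - fault_type: "kill-process"
+          process_name: "neutron-server"
+          host: node1                  # node name as recorded in pod.yaml
+        monitors:
+        - monitor_type: "openstack-cmd"
+          command_name: "neutron agent-list"
+        - monitor_type: "process"
+          process_name: "neutron-server"
+          host: node1
+      sla:
+        outage_time: 5                 # illustrative service_outage_time limit (s)
+        recover_time: 10               # illustrative process_recover_time limit (s)
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc046.rst b/docs/testing/user/userguide/opnfv_yardstick_tc046.rst
new file mode 100644
index 000000000..cce6c6884
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc046.rst
@@ -0,0 +1,138 @@
+.. 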
This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Yin Kanglin and others. +.. 14_ykl@tongji.edu.cn + +************************************* +Yardstick Test Case Description TC046 +************************************* + ++-----------------------------------------------------------------------------+ +|Control Node Openstack Service High Availability - Keystone | +| | ++--------------+--------------------------------------------------------------+ +|test case id | OPNFV_YARDSTICK_TC046: Control node Openstack service down - | +| | keystone | ++--------------+--------------------------------------------------------------+ +|test purpose | This test case will verify the high availability of the | +| | user service provided by OpenStack (keystone) on control | +| | node. | +| | | ++--------------+--------------------------------------------------------------+ +|test method | This test case kills the processes of keystone service on a | +| | selected control node, then checks whether the request of | +| | the related Openstack command is OK and the killed processes | +| | are recovered. | +| | | ++--------------+--------------------------------------------------------------+ +|attackers | In this test case, an attacker called "kill-process" is | +| | needed. This attacker includes three parameters: | +| | 1) fault_type: which is used for finding the attacker's | +| | scripts. It should be always set to "kill-process" in this | +| | test case. | +| | 2) process_name: which is the process name of the specified | +| | OpenStack service. If there are multiple processes use the | +| | same name on the host, all of them are killed by this | +| | attacker. | +| | In this case. This parameter should always set to "keystone" | +| | 3) host: which is the name of a control node being attacked. | +| | | +| | e.g. | +| | -fault_type: "kill-process" | +| | -process_name: "keystone" | +| | -host: node1 | +| | | ++--------------+--------------------------------------------------------------+ +|monitors | In this test case, two kinds of monitor are needed: | +| | 1. the "openstack-cmd" monitor constantly request a specific | +| | Openstack command, which needs two parameters: | +| | 1) monitor_type: which is used for finding the monitor class | +| | and related scritps. It should be always set to | +| | "openstack-cmd" for this monitor. | +| | 2) command_name: which is the command name used for request. | +| | In this case, the command name should be keystone related | +| | commands. | +| | | +| | 2. the "process" monitor check whether a process is running | +| | on a specific node, which needs three parameters: | +| | 1) monitor_type: which used for finding the monitor class and| +| | related scritps. It should be always set to "process" | +| | for this monitor. | +| | 2) process_name: which is the process name for monitor | +| | 3) host: which is the name of the node runing the process | +| | | +| | e.g. | +| | monitor1: | +| | -monitor_type: "openstack-cmd" | +| | -command_name: "keystone user-list" | +| | monitor2: | +| | -monitor_type: "process" | +| | -process_name: "keystone" | +| | -host: node1 | +| | | ++--------------+--------------------------------------------------------------+ +|metrics | In this test case, there are two metrics: | +| | 1)service_outage_time: which indicates the maximum outage | +| | time (seconds) of the specified Openstack command request. 
| +| | 2)process_recover_time: which indicates the maximun time | +| | (seconds) from the process being killed to recovered | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | Developed by the project. Please see folder: | +| | "yardstick/benchmark/scenarios/availability/ha_tools" | +| | | ++--------------+--------------------------------------------------------------+ +|references | ETSI NFV REL001 | +| | | ++--------------+--------------------------------------------------------------+ +|configuration | This test case needs two configuration files: | +| | 1) test case file: opnfv_yardstick_tc046.yaml | +| | -Attackers: see above "attackers" discription | +| | -waiting_time: which is the time (seconds) from the process | +| | being killed to stoping monitors the monitors | +| | -Monitors: see above "monitors" discription | +| | -SLA: see above "metrics" discription | +| | | +| | 2)POD file: pod.yaml | +| | The POD configuration should record on pod.yaml first. | +| | the "host" item in this test case will use the node name in | +| | the pod.yaml. | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | start monitors: | +| | each monitor will run with independently process | +| | | +| | Result: The monitor info will be collected. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | do attacker: connect the host through SSH, and then execute | +| | the kill process script with param value specified by | +| | "process_name" | +| | | +| | Result: Process will be killed. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | stop monitors after a period of time specified by | +| | "waiting_time" | +| | | +| | Result: The monitor info will be aggregated. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | verify the SLA | +| | | +| | Result: The test case is passed or not. | +| | | ++--------------+--------------------------------------------------------------+ +|post-action | It is the action when the test cases exist. It will check the| +| | status of the specified process on the host, and restart the | +| | process if it is not running for next test cases | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | Fails only if SLA is not passed, or if there is a test case | +| | execution problem. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc047.rst b/docs/testing/user/userguide/opnfv_yardstick_tc047.rst new file mode 100644 index 000000000..95158cfd6 --- /dev/null +++ b/docs/testing/user/userguide/opnfv_yardstick_tc047.rst @@ -0,0 +1,139 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Yin Kanglin and others. +.. 
14_ykl@tongji.edu.cn + +************************************* +Yardstick Test Case Description TC047 +************************************* + ++-----------------------------------------------------------------------------+ +|Control Node Openstack Service High Availability - Glance Api | +| | ++--------------+--------------------------------------------------------------+ +|test case id | OPNFV_YARDSTICK_TC047: Control node Openstack service down - | +| | glance api | ++--------------+--------------------------------------------------------------+ +|test purpose | This test case will verify the high availability of the | +| | image service provided by OpenStack (glance-api) on control | +| | node. | +| | | ++--------------+--------------------------------------------------------------+ +|test method | This test case kills the processes of glance-api service on | +| | a selected control node, then checks whether the request of | +| | the related Openstack command is OK and the killed processes | +| | are recovered. | +| | | ++--------------+--------------------------------------------------------------+ +|attackers | In this test case, an attacker called "kill-process" is | +| | needed. This attacker includes three parameters: | +| | 1) fault_type: which is used for finding the attacker's | +| | scripts. It should be always set to "kill-process" in this | +| | test case. | +| | 2) process_name: which is the process name of the specified | +| | OpenStack service. If there are multiple processes use the | +| | same name on the host, all of them are killed by this | +| | attacker. | +| | In this case. This parameter should always set to "glance- | +| | api". | +| | 3) host: which is the name of a control node being attacked. | +| | | +| | e.g. | +| | -fault_type: "kill-process" | +| | -process_name: "glance-api" | +| | -host: node1 | +| | | ++--------------+--------------------------------------------------------------+ +|monitors | In this test case, two kinds of monitor are needed: | +| | 1. the "openstack-cmd" monitor constantly request a specific | +| | Openstack command, which needs two parameters: | +| | 1) monitor_type: which is used for finding the monitor class | +| | and related scritps. It should be always set to | +| | "openstack-cmd" for this monitor. | +| | 2) command_name: which is the command name used for request. | +| | In this case, the command name should be glance related | +| | commands. | +| | | +| | 2. the "process" monitor check whether a process is running | +| | on a specific node, which needs three parameters: | +| | 1) monitor_type: which used for finding the monitor class and| +| | related scritps. It should be always set to "process" | +| | for this monitor. | +| | 2) process_name: which is the process name for monitor | +| | 3) host: which is the name of the node runing the process | +| | | +| | e.g. | +| | monitor1: | +| | -monitor_type: "openstack-cmd" | +| | -command_name: "glance image-list" | +| | monitor2: | +| | -monitor_type: "process" | +| | -process_name: "glance-api" | +| | -host: node1 | +| | | ++--------------+--------------------------------------------------------------+ +|metrics | In this test case, there are two metrics: | +| | 1)service_outage_time: which indicates the maximum outage | +| | time (seconds) of the specified Openstack command request. 
| +| | 2)process_recover_time: which indicates the maximun time | +| | (seconds) from the process being killed to recovered | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | Developed by the project. Please see folder: | +| | "yardstick/benchmark/scenarios/availability/ha_tools" | +| | | ++--------------+--------------------------------------------------------------+ +|references | ETSI NFV REL001 | +| | | ++--------------+--------------------------------------------------------------+ +|configuration | This test case needs two configuration files: | +| | 1) test case file: opnfv_yardstick_tc047.yaml | +| | -Attackers: see above "attackers" discription | +| | -waiting_time: which is the time (seconds) from the process | +| | being killed to stoping monitors the monitors | +| | -Monitors: see above "monitors" discription | +| | -SLA: see above "metrics" discription | +| | | +| | 2)POD file: pod.yaml | +| | The POD configuration should record on pod.yaml first. | +| | the "host" item in this test case will use the node name in | +| | the pod.yaml. | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | start monitors: | +| | each monitor will run with independently process | +| | | +| | Result: The monitor info will be collected. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | do attacker: connect the host through SSH, and then execute | +| | the kill process script with param value specified by | +| | "process_name" | +| | | +| | Result: Process will be killed. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | stop monitors after a period of time specified by | +| | "waiting_time" | +| | | +| | Result: The monitor info will be aggregated. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | verify the SLA | +| | | +| | Result: The test case is passed or not. | +| | | ++--------------+--------------------------------------------------------------+ +|post-action | It is the action when the test cases exist. It will check the| +| | status of the specified process on the host, and restart the | +| | process if it is not running for next test cases | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | Fails only if SLA is not passed, or if there is a test case | +| | execution problem. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc048.rst b/docs/testing/user/userguide/opnfv_yardstick_tc048.rst new file mode 100644 index 000000000..21c00d1fe --- /dev/null +++ b/docs/testing/user/userguide/opnfv_yardstick_tc048.rst @@ -0,0 +1,139 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Yin Kanglin and others. +.. 
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc048.rst b/docs/testing/user/userguide/opnfv_yardstick_tc048.rst
new file mode 100644
index 000000000..21c00d1fe
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc048.rst
@@ -0,0 +1,139 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC048
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node OpenStack Service High Availability - Cinder Api                |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC048: Control node OpenStack service down - |
+|              | cinder api                                                   |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | volume service provided by OpenStack (cinder-api) on the     |
+|              | control node.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test method   | This test case kills the processes of the cinder-api service |
+|              | on a selected control node, then checks whether the request  |
+|              | of the related OpenStack command is OK and the killed        |
+|              | processes are recovered.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "kill-process" is      |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "kill-process" in this   |
+|              | test case.                                                   |
+|              | 2) process_name: which is the process name of the specified  |
+|              | OpenStack service. If there are multiple processes using the |
+|              | same name on the host, all of them are killed by this        |
+|              | attacker. In this case, this parameter should always be set  |
+|              | to "cinder-api".                                             |
+|              | 3) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -fault_type: "kill-process"                                  |
+|              | -process_name: "cinder-api"                                  |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, two kinds of monitor are needed:          |
+|              | 1. the "openstack-cmd" monitor constantly requests a specific|
+|              | OpenStack command, which needs two parameters:               |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request. In this case, the command name should be cinder     |
+|              | related commands.                                            |
+|              |                                                              |
+|              | 2. the "process" monitor checks whether a process is running |
+|              | on a specific node, which needs three parameters:            |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "process"    |
+|              | for this monitor.                                            |
+|              | 2) process_name: which is the process name to monitor        |
+|              | 3) host: which is the name of the node running the process   |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "cinder list"                                 |
+|              | monitor2:                                                    |
+|              | -monitor_type: "process"                                     |
+|              | -process_name: "cinder-api"                                  |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there are two metrics:                    |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
+|              | 2) process_recover_time: which indicates the maximum time    |
+|              | (seconds) from the process being killed to recovered         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc048.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the process  |
+|              | being killed to stopping the monitors                        |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the kill process script with the param value specified by    |
+|              | "process_name"                                               |
+|              |                                                              |
+|              | Result: Process will be killed.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It will     |
+|              | check the status of the specified process on the host, and   |
+|              | restart the process if it is not running, for the next test  |
+|              | cases.                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc049.rst b/docs/testing/user/userguide/opnfv_yardstick_tc049.rst
new file mode 100644
index 000000000..f58bb9989
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc049.rst
@@ -0,0 +1,139 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC049
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node OpenStack Service High Availability - Swift Proxy               |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC049: Control node OpenStack service down - |
+|              | swift proxy                                                  |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | storage service provided by OpenStack (swift-proxy) on the   |
+|              | control node.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test method   | This test case kills the processes of the swift-proxy        |
+|              | service on a selected control node, then checks whether the  |
+|              | request of the related OpenStack command is OK and the       |
+|              | killed processes are recovered.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "kill-process" is      |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "kill-process" in this   |
+|              | test case.                                                   |
+|              | 2) process_name: which is the process name of the specified  |
+|              | OpenStack service. If there are multiple processes using the |
+|              | same name on the host, all of them are killed by this        |
+|              | attacker. In this case, this parameter should always be set  |
+|              | to "swift-proxy".                                            |
+|              | 3) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -fault_type: "kill-process"                                  |
+|              | -process_name: "swift-proxy"                                 |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, two kinds of monitor are needed:          |
+|              | 1. the "openstack-cmd" monitor constantly requests a specific|
+|              | OpenStack command, which needs two parameters:               |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request. In this case, the command name should be swift      |
+|              | related commands.                                            |
+|              |                                                              |
+|              | 2. the "process" monitor checks whether a process is running |
+|              | on a specific node, which needs three parameters:            |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "process"    |
+|              | for this monitor.                                            |
+|              | 2) process_name: which is the process name to monitor        |
+|              | 3) host: which is the name of the node running the process   |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "swift stat"                                  |
+|              | monitor2:                                                    |
+|              | -monitor_type: "process"                                     |
+|              | -process_name: "swift-proxy"                                 |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there are two metrics:                    |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
+|              | 2) process_recover_time: which indicates the maximum time    |
+|              | (seconds) from the process being killed to recovered         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc049.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the process  |
+|              | being killed to stopping the monitors                        |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the kill process script with the param value specified by    |
+|              | "process_name"                                               |
+|              |                                                              |
+|              | Result: Process will be killed.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It will     |
+|              | check the status of the specified process on the host, and   |
+|              | restart the process if it is not running, for the next test  |
+|              | cases.                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
new file mode 100644
index 000000000..8890c9d53
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
@@ -0,0 +1,135 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC050
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node Network High Availability                          |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC050: OpenStack Controller Node Network     |
+|              | High Availability                                            |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | control node. When one of the controllers fails to connect   |
+|              | to the network, the OpenStack services on this node break    |
+|              | down. These OpenStack services should still be accessible    |
+|              | from other controller nodes, and the services on the failed  |
+|              | controller node should be isolated.                          |
++--------------+--------------------------------------------------------------+
+|test method   | This test case turns off the network interfaces of a         |
+|              | specified control node, then checks whether all services     |
+|              | provided by the control node are OK with some monitor tools. |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "close-interface" is   |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "close-interface" in     |
+|              | this test case.                                              |
+|              | 2) host: which is the name of a control node being attacked. |
+|              | 3) interface: the network interface to be turned off.        |
+|              |                                                              |
+|              | There are four instances of the "close-interface" attacker:  |
+|              | attacker1 (for the public network):                          |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-ex"                                          |
+|              | attacker2 (for the management network):                      |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-mgmt"                                        |
+|              | attacker3 (for the storage network):                         |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-storage"                                     |
+|              | attacker4 (for the private network):                         |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-mesh"                                        |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, the monitor named "openstack-cmd" is      |
+|              | needed. The monitor needs two parameters:                    |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request                                                      |
+|              |                                                              |
+|              | There are four instances of the "openstack-cmd" monitor:     |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "nova image-list"                             |
+|              | monitor2:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "neutron router-list"                         |
+|              | monitor3:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "heat stack-list"                             |
+|              | monitor4:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "cinder list"                                 |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there is one metric:                      |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc050.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the attack   |
+|              | being performed to stopping the monitors                     |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the turnoff network interface script with the param value    |
+|              | specified by "interface".                                    |
+|              |                                                              |
+|              | Result: Network interfaces will be turned down.              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It turns up |
+|              | the network interface of the control node if it is not       |
+|              | turned up.                                                   |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
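+
+A minimal YAML sketch of one attacker/monitor pairing for this test case,
+under the same assumptions as the sketch in TC047 (scenario type, node
+mapping and SLA keys are illustrative, not the exact contents of
+opnfv_yardstick_tc050.yaml)::
+
+    scenarios:
+      -
+        type: ServiceHA
+        options:
+          attackers:
+            - fault_type: "close-interface"
+              host: node1
+              interface: "br-mgmt"
+          monitors:
+            - monitor_type: "openstack-cmd"
+              command_name: "nova image-list"
+        nodes:
+          node1: node1.LF    # illustrative pod.yaml node name
+        sla:
+          outage_time: 5
+          action: monitor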
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc051.rst b/docs/testing/user/userguide/opnfv_yardstick_tc051.rst
new file mode 100644
index 000000000..3402ccd92
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc051.rst
@@ -0,0 +1,117 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC051
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node CPU Overload High Availability                     |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC051: OpenStack Controller Node CPU         |
+|              | Overload High Availability                                   |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | control node. When the CPU usage of a specified controller   |
+|              | node is stressed to 100%, the OpenStack services on this     |
+|              | node break down. These OpenStack services should still be    |
+|              | accessible from other controller nodes, and the services on  |
+|              | the failed controller node should be isolated.               |
++--------------+--------------------------------------------------------------+
+|test method   | This test case stresses the CPU usage of a specified control |
+|              | node to 100%, then checks whether all services provided by   |
+|              | the environment are OK with some monitor tools.              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "stress-cpu" is        |
+|              | needed. This attacker includes two parameters:               |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "stress-cpu" in          |
+|              | this test case.                                              |
+|              | 2) host: which is the name of a control node being attacked. |
+|              | e.g.                                                         |
+|              | -fault_type: "stress-cpu"                                    |
+|              | -host: node1                                                 |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, the monitor named "openstack-cmd" is      |
+|              | needed. The monitor needs two parameters:                    |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request                                                      |
+|              |                                                              |
+|              | There are four instances of the "openstack-cmd" monitor:     |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "nova image-list"                             |
+|              | monitor2:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "neutron router-list"                         |
+|              | monitor3:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "heat stack-list"                             |
+|              | monitor4:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "cinder list"                                 |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there is one metric:                      |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc051.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the attack   |
+|              | being performed to stopping the monitors                     |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the stress cpu script on the host.                           |
+|              |                                                              |
+|              | Result: The CPU usage of the host will be stressed to 100%.  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It kills    |
+|              | the process that stresses the CPU usage.                     |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
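+
+A minimal YAML sketch of the stress-cpu attacker together with one of the
+four monitors, under the same assumptions as the earlier sketches (scenario
+type, node mapping and SLA keys are illustrative)::
+
+    scenarios:
+      -
+        type: ServiceHA
+        options:
+          attackers:
+            - fault_type: "stress-cpu"
+              host: node1
+          monitors:
+            - monitor_type: "openstack-cmd"
+              command_name: "nova image-list"
+        nodes:
+          node1: node1.LF    # illustrative pod.yaml node name
+        sla:
+          outage_time: 5
+          action: monitor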
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
new file mode 100644
index 000000000..9514b6819
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC052
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node Disk I/O Block High Availability                   |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC052: OpenStack Controller Node Disk I/O    |
+|              | Block High Availability                                      |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | control node. When the disk I/O of a specified disk is       |
+|              | blocked, the OpenStack services on this node break down.     |
+|              | Read and write services should still be accessible from      |
+|              | other controller nodes, and the services on the failed       |
+|              | controller node should be isolated.                          |
++--------------+--------------------------------------------------------------+
+|test method   | This test case blocks the disk I/O of a specified control    |
+|              | node, then checks whether the services that need to read or  |
+|              | write the disk of the control node are OK with some monitor  |
+|              | tools.                                                       |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "disk-block" is        |
+|              | needed. This attacker includes two parameters:               |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "disk-block" in this     |
+|              | test case.                                                   |
+|              | 2) host: which is the name of a control node being attacked. |
+|              | e.g.                                                         |
+|              | -fault_type: "disk-block"                                    |
+|              | -host: node1                                                 |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, two kinds of monitor are needed:          |
+|              | 1. the "openstack-cmd" monitor constantly requests a specific|
+|              | OpenStack command, which needs two parameters:               |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request.                                                     |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "nova flavor-list"                            |
+|              |                                                              |
+|              | 2. the second monitor verifies the read and write function   |
+|              | by an "operation" and a "result checker".                    |
+|              | the "operation" has two parameters:                          |
+|              | 1) operation_type: which is used for finding the operation   |
+|              | class and related scripts.                                   |
+|              | 2) action_parameter: parameters for the operation.           |
+|              | the "result checker" has three parameters:                   |
+|              | 1) checker_type: which is used for finding the result        |
+|              | checker class and related scripts.                           |
+|              | 2) expectedValue: the expected value for the output of the   |
+|              | checker script.                                              |
+|              | 3) condition: whether the expected value is in the output of |
+|              | the checker script or is totally the same as the output.     |
+|              |                                                              |
+|              | In this case, the "operation" adds a flavor and the "result  |
+|              | checker" checks whether this flavor is created. Their        |
+|              | parameters are shown as follows:                             |
+|              | operation:                                                   |
+|              | -operation_type: "nova-create-flavor"                        |
+|              | -action_parameter:                                           |
+|              |   flavorconfig: "test-001 test-001 100 1 1"                  |
+|              | result checker:                                              |
+|              | -checker_type: "check-flavor"                                |
+|              | -expectedValue: "test-001"                                   |
+|              | -condition: "in"                                             |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there is one metric:                      |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc052.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the attack   |
+|              | being performed to stopping the monitors                     |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | do attacker: connect the host through SSH, and then execute  |
+|              | the block disk I/O script on the host.                       |
+|              |                                                              |
+|              | Result: The disk I/O of the host will be blocked             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | do operation: add a flavor                                   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | do result checker: check whether the flavor is created       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 5        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 6        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It executes |
+|              | the release disk I/O script to release the blocked I/O.      |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails if the monitor SLA is not passed or the result checker |
+|              | is not passed, or if there is a test case execution problem. |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
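+
+A minimal YAML sketch showing how the operation and result checker might sit
+next to the attacker and monitor. The "operations" and "resultCheckers" keys,
+the scenario type and the node mapping are assumptions inferred from the
+parameter names in this table, not the exact schema of
+opnfv_yardstick_tc052.yaml::
+
+    scenarios:
+      -
+        type: GeneralHA
+        options:
+          attackers:
+            - fault_type: "disk-block"
+              host: node1
+          monitors:
+            - monitor_type: "openstack-cmd"
+              command_name: "nova flavor-list"
+          operations:
+            - operation_type: "nova-create-flavor"
+              action_parameter:
+                flavorconfig: "test-001 test-001 100 1 1"
+          resultCheckers:
+            - checker_type: "check-flavor"
+              expectedValue: "test-001"
+              condition: "in"
+        nodes:
+          node1: node1.LF    # illustrative pod.yaml node name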
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc053.rst b/docs/testing/user/userguide/opnfv_yardstick_tc053.rst
new file mode 100644
index 000000000..3c6bbc628
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc053.rst
@@ -0,0 +1,142 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC053
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Load Balance Service High Availability                  |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC053: OpenStack Controller Load Balance     |
+|              | Service High Availability                                    |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | load balance service (currently HAProxy) that supports       |
+|              | OpenStack on the controller node. When the load balance      |
+|              | service of a specified controller node is killed, whether    |
+|              | other load balancers on other controller nodes will work,    |
+|              | and whether the controller node will restart the load        |
+|              | balancer, are checked.                                       |
++--------------+--------------------------------------------------------------+
+|test method   | This test case kills the processes of the load balance       |
+|              | service on a selected control node, then checks whether the  |
+|              | request of the related OpenStack command is OK and the       |
+|              | killed processes are recovered.                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "kill-process" is      |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "kill-process" in this   |
+|              | test case.                                                   |
+|              | 2) process_name: which is the process name of the specified  |
+|              | OpenStack service. If there are multiple processes using the |
+|              | same name on the host, all of them are killed by this        |
+|              | attacker. In this case, this parameter should always be set  |
+|              | to "haproxy".                                                |
+|              | 3) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -fault_type: "kill-process"                                  |
+|              | -process_name: "haproxy"                                     |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, two kinds of monitor are needed:          |
+|              | 1. the "openstack-cmd" monitor constantly requests a specific|
+|              | OpenStack command, which needs two parameters:               |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request.                                                     |
+|              |                                                              |
+|              | 2. the "process" monitor checks whether a process is running |
+|              | on a specific node, which needs three parameters:            |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "process"    |
+|              | for this monitor.                                            |
+|              | 2) process_name: which is the process name to monitor        |
+|              | 3) host: which is the name of the node running the process   |
+|              | In this case, the command_name of monitor1 should be a       |
+|              | service that is supported by the load balancer and the       |
+|              | process_name of monitor2 should be "haproxy", for example:   |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "nova image-list"                             |
+|              | monitor2:                                                    |
+|              | -monitor_type: "process"                                     |
+|              | -process_name: "haproxy"                                     |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there are two metrics:                    |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
+|              | 2) process_recover_time: which indicates the maximum time    |
+|              | (seconds) from the process being killed to recovered         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc053.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the process  |
+|              | being killed to stopping the monitors                        |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the kill process script with the param value specified by    |
+|              | "process_name"                                               |
+|              |                                                              |
+|              | Result: Process will be killed.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It will     |
+|              | check the status of the specified process on the host, and   |
+|              | restart the process if it is not running, for the next test  |
+|              | cases.                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc054.rst b/docs/testing/user/userguide/opnfv_yardstick_tc054.rst
new file mode 100644
index 000000000..7f92be2bc
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc054.rst
@@ -0,0 +1,125 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC054
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Virtual IP High Availability                                       |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC054: OpenStack Virtual IP High             |
+|              | Availability                                                 |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | virtual IP in the environment. When the master node of the   |
+|              | virtual IP is abnormally shut down, connections to the       |
+|              | virtual IP and the services bound to the virtual IP should   |
+|              | be OK.                                                       |
++--------------+--------------------------------------------------------------+
+|test method   | This test case shuts down the virtual IP master node with    |
+|              | some fault injection tools, then checks whether the virtual  |
+|              | IPs can be pinged and the services bound to the virtual IP   |
+|              | are OK with some monitor tools.                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "control-shutdown" is  |
+|              | needed. This attacker includes two parameters:               |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "control-shutdown" in    |
+|              | this test case.                                              |
+|              | 2) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | In this case the host should be the virtual IP master node,  |
+|              | which means the host IP is the virtual IP, for example:      |
+|              | -fault_type: "control-shutdown"                              |
+|              | -host: node1 (the VIP master node)                           |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, two kinds of monitor are needed:          |
+|              | 1. the "ip_status" monitor that pings a specific IP to check |
+|              | the connectivity of this IP, which needs two parameters:     |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "ip_status"  |
+|              | for this monitor.                                            |
+|              | 2) ip_address: the IP to be pinged. In this case, ip_address |
+|              | should be the virtual IP.                                    |
+|              |                                                              |
+|              | 2. the "openstack-cmd" monitor constantly requests a specific|
+|              | OpenStack command, which needs two parameters:               |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: which is the command name used for the      |
+|              | request.                                                     |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor1:                                                    |
+|              | -monitor_type: "ip_status"                                   |
+|              | -ip_address: 192.168.0.2                                     |
+|              | monitor2:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "nova image-list"                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there are two metrics:                    |
+|              | 1) ping_outage_time: which indicates the maximum outage time |
+|              | to ping the specified host.                                  |
+|              | 2) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc054.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the attack   |
+|              | being performed to stopping the monitors                     |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the shutdown script on the VIP master node.                  |
+|              |                                                              |
+|              | Result: VIP master node will be shut down                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It restarts |
+|              | the original VIP master node if it is not restarted.         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
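+
+A minimal YAML sketch of the shutdown attacker and the two monitors, under
+the same assumptions as the earlier sketches (scenario type, node mapping,
+SLA keys and the virtual IP value are placeholders)::
+
+    scenarios:
+      -
+        type: ServiceHA
+        options:
+          attackers:
+            - fault_type: "control-shutdown"
+              host: node1            # the current VIP master node
+          monitors:
+            - monitor_type: "ip_status"
+              ip_address: 192.168.0.2    # the virtual IP
+            - monitor_type: "openstack-cmd"
+              command_name: "nova image-list"
+        nodes:
+          node1: node1.LF    # illustrative pod.yaml node name
+        sla:
+          outage_time: 5
+          action: monitor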
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
new file mode 100644
index 000000000..c861ca90c
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
@@ -0,0 +1,67 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC055
+*************************************
+
+.. _/proc/cpuinfo: http://www.linfo.org/proc_cpuinfo.html
+
++-----------------------------------------------------------------------------+
+|Compute Capacity                                                             |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC055_Compute Capacity                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Number of cpus, number of cores, number of threads, available|
+|              | memory size and total cache size.                            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS compute capacity with regards to        |
+|              | hardware specification, including number of cpus, number of  |
+|              | cores, number of threads, available memory size and total    |
+|              | cache size.                                                  |
+|              | Test results, graphs and similar shall be stored for         |
+|              | comparison reasons and product evolution understanding       |
+|              | between different OPNFV versions and/or configurations.      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc055.yaml                             |
+|              |                                                              |
+|              | There are no additional configurations to be set for this    |
+|              | TC.                                                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | /proc/cpuinfo                                                |
+|              |                                                              |
+|              | this TC uses /proc/cpuinfo as the source to produce compute  |
+|              | capacity output.                                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | /proc/cpuinfo_                                               |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | None.                                                        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | No POD specific requirements have been identified.           |
+|conditions    |                                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The hosts are installed, the TC is invoked and logs are      |
+|              | produced and stored.                                         |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None. Hardware specification is fetched and stored.          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc061.rst b/docs/testing/user/userguide/opnfv_yardstick_tc061.rst
new file mode 100644
index 000000000..1d424414e
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc061.rst
@@ -0,0 +1,88 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC061
+*************************************
+
+.. _man-pages: http://linux.die.net/man/1/sar
+
++-----------------------------------------------------------------------------+
+|Network Utilization                                                          |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC061_Network Utilization                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Network utilization                                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS network capability with regards to      |
+|              | network utilization, including Total number of packets       |
+|              | received per second, Total number of packets transmitted per |
+|              | second, Total number of kilobytes received per second, Total |
+|              | number of kilobytes transmitted per second, Number of        |
+|              | compressed packets received per second (for cslip etc.),     |
+|              | Number of compressed packets transmitted per second, Number  |
+|              | of multicast packets received per second, Utilization        |
+|              | percentage of the network interface.                         |
+|              | This test case should be run in parallel with other          |
+|              | Yardstick test cases and not run as a stand-alone test case. |
+|              | Measure the network usage statistics from the network        |
+|              | devices. Average, minimum and maximum values are obtained.   |
+|              | The purpose is also to be able to spot trends.               |
+|              | Test results, graphs and similar shall be stored for         |
+|              | comparison reasons and product evolution understanding       |
+|              | between different OPNFV versions and/or configurations.      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | File: netutilization.yaml (in the 'samples' directory)       |
+|              |                                                              |
+|              | * interval: 1 - repeat, pausing every 1 second in-between.   |
+|              | * count: 1 - display statistics once, then exit.             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | sar                                                          |
+|              |                                                              |
+|              | The sar command writes to standard output the contents of    |
+|              | selected cumulative activity counters in the operating       |
+|              | system.                                                      |
+|              | sar is normally part of a Linux distribution, hence it       |
+|              | doesn't need to be installed.                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | man-pages_                                                   |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              | * interval;                                                  |
+|              | * count;                                                     |
+|              | * runner Iteration and intervals.                            |
+|              |                                                              |
+|              | There are default values for each above-mentioned option.    |
+|              | Run in background with other test cases.                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with sar included in the image.                              |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result.                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The host is installed as client. The related TC, or TCs, is  |
+|              | invoked and sar logs are produced and stored.                |
+|              |                                                              |
+|              | Result: logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None. Network utilization results are fetched and stored.    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
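+
+A minimal sketch of the sampler configuration described above; the scenario
+type and host name are assumptions, while "interval" and "count" follow the
+option descriptions in this table::
+
+    scenarios:
+      -
+        type: NetUtilization    # assumed scenario name
+        options:
+          interval: 1
+          count: 1
+        host: client.demo       # placeholder host from the context file
+        runner:
+          type: Iteration
+          iterations: 1
+          interval: 1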
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
new file mode 100644
index 000000000..a77653aa5
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
@@ -0,0 +1,81 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC063
+*************************************
+
+.. _iostat: http://linux.die.net/man/1/iostat
+.. _fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
+
++-----------------------------------------------------------------------------+
+|Storage Capacity                                                             |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC063_Storage Capacity                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Storage/disk size, block size                                |
+|              | Disk Utilization                                             |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will check the parameters which could decide  |
+|              | several models and each model has its specified task to      |
+|              | measure. The test purposes are to measure disk size, block   |
+|              | size and disk utilization. With the test results, we could   |
+|              | evaluate the storage capacity of the host.                   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc063.yaml                             |
+|              |                                                              |
+|              | * test_type: "disk_size"                                     |
+|              | * runner:                                                    |
+|              |   type: Iteration                                            |
+|              |   iterations: 1 - test is run 1 time iteratively.            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | fdisk                                                        |
+|              | A command-line utility that provides disk partitioning       |
+|              | functions                                                    |
+|              |                                                              |
+|              | iostat                                                       |
+|              | This is a computer system monitor tool used to collect and   |
+|              | show operating system storage input and output statistics.   |
++--------------+--------------------------------------------------------------+
+|references    | iostat_                                                      |
+|              | fdisk_                                                       |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              | * test_type: "disk size", "block size", "disk utilization"   |
+|              | * interval: 1 - how often to stat disk utilization           |
+|              |   type: int                                                  |
+|              |   unit: seconds                                              |
+|              | * count: 15 - how many times to stat disk utilization        |
+|              |   type: int                                                  |
+|              |   unit: na                                                   |
+|              | There are default values for each above-mentioned option.    |
+|              | Run in background with other test cases.                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance.       |
+|conditions    |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | The specified storage capacity and disk information are      |
+|              | output into a file in sequence.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The pod is available and the hosts are installed. Node5 is   |
+|              | used and logs are produced and stored.                       |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None.                                                        |
++--------------+--------------------------------------------------------------+
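+
+A minimal sketch of the configuration described above; the scenario type and
+host name are assumptions, while "test_type" and the runner settings follow
+the option descriptions in this table::
+
+    scenarios:
+      -
+        type: StorageCapacity   # assumed scenario name
+        options:
+          test_type: "disk_size"
+        host: node5.LF          # placeholder for the Node5 mentioned above
+        runner:
+          type: Iteration
+          iterations: 1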
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc070.rst b/docs/testing/user/userguide/opnfv_yardstick_tc070.rst
new file mode 100644
index 000000000..64fcc0c91
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc070.rst
@@ -0,0 +1,110 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC070
+*************************************
+
+.. _cirros: https://download.cirros-cloud.net
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+.. _free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
+
++-----------------------------------------------------------------------------+
+|Latency, Memory Utilization, Throughput, Packet Loss |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC070_Latency, Memory Utilization, |
+| | Throughput, Packet Loss |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Number of flows, latency, throughput, Memory Utilization, |
+| | packet loss |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS network performance with regards to |
+| | flows and throughput, in particular how different numbers of |
+| | flows affect the throughput between hosts on different |
+| | compute blades. For example, the performance of a vSwitch |
+| | typically depends on the number of flows running through it. |
+| | The performance of other equipment or entities can also |
+| | depend on the number of flows or on the packet sizes used. |
+| | The purpose is also to be able to spot trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc070.yaml |
+| | |
+| | Packet size: 64 bytes |
+| | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000. |
+| | The configured port amounts map to between 2 and 1001000 |
+| | flows, respectively. Each port amount is run two times, for |
+| | 20 seconds each. Then the next port_amount is run, and so on.|
+| | During the test Memory Utilization on both client and server,|
+| | and the network latency between the client and server are |
+| | measured. |
+| | The client and server are distributed on different hardware. |
+| | For SLA max_ppm is set to 1000. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | pktgen |
+| | |
+| | Pktgen is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Glance |
+| | image. |
+| | (As an example see the /yardstick/tools/ directory for how |
+| | to generate a Linux image with pktgen included.) |
+| | |
+| | ping |
+| | |
+| | Ping is normally part of any Linux distribution, hence it |
+| | doesn't need to be installed. It is also part of the |
+| | Yardstick Glance image. |
+| | (For example, a cirros_ image can be downloaded; it |
+| | includes ping.) |
+| | |
+| | free |
+| | |
+| | free provides information about unused and used memory and |
+| | swap space on any computer running Linux or another Unix-like|
+| | operating system. |
+| | free is normally part of a Linux distribution, hence it |
+| | doesn't need to be installed. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | Ping and free man pages |
+| | |
+| | pktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes, amount |
+| | of flows and test duration. Default values exist. |
+| | |
+| | SLA (optional): max_ppm: The number of packets per million |
+| | packets sent that it is acceptable to lose, i.e. not receive.|
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with pktgen included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The hosts are installed, as server and client. pktgen is |
+| | invoked and logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
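+
+During the test run, memory utilization can be sampled with free, for
+example as below (an illustrative invocation, not the exact command used by
+the Yardstick scenario)::
+
+  # report memory usage in bytes once per second, 20 times
+  free -b -s 1 -c 20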
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc071.rst b/docs/testing/user/userguide/opnfv_yardstick_tc071.rst
new file mode 100644
index 000000000..673480b55
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc071.rst
@@ -0,0 +1,109 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC071
+*************************************
+
+.. _cirros: https://download.cirros-cloud.net
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+.. _cachestat: https://github.com/brendangregg/perf-tools/tree/master/fs
+
++-----------------------------------------------------------------------------+
+|Latency, Cache Utilization, Throughput, Packet Loss |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC071_Latency, Cache Utilization, |
+| | Throughput, Packet Loss |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Number of flows, latency, throughput, Cache Utilization, |
+| | packet loss |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS network performance with regards to |
+| | flows and throughput, in particular how different numbers of |
+| | flows affect the throughput between hosts on different |
+| | compute blades. For example, the performance of a vSwitch |
+| | typically depends on the number of flows running through it. |
+| | The performance of other equipment or entities can also |
+| | depend on the number of flows or on the packet sizes used. |
+| | The purpose is also to be able to spot trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc071.yaml |
+| | |
+| | Packet size: 64 bytes |
+| | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000. |
+| | The configured port amounts map to between 2 and 1001000 |
+| | flows, respectively. Each port amount is run two times, for |
+| | 20 seconds each. Then the next port_amount is run, and so on.|
+| | During the test Cache Utilization on both client and server, |
+| | and the network latency between the client and server are |
+| | measured. |
+| | The client and server are distributed on different hardware. |
+| | For SLA max_ppm is set to 1000. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | pktgen |
+| | |
+| | Pktgen is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Glance |
+| | image. |
+| | (As an example see the /yardstick/tools/ directory for how |
+| | to generate a Linux image with pktgen included.) |
+| | |
+| | ping |
+| | |
+| | Ping is normally part of any Linux distribution, hence it |
+| | doesn't need to be installed. It is also part of the |
+| | Yardstick Glance image. |
+| | (For example, a cirros_ image can be downloaded; it |
+| | includes ping.) |
+| | |
+| | cachestat |
+| | |
+| | cachestat is not always part of a Linux distribution, hence |
+| | it needs to be installed. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | Ping man pages |
+| | |
+| | pktgen_ |
+| | |
+| | cachestat_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes, amount |
+| | of flows and test duration. Default values exist. |
+| | |
+| | SLA (optional): max_ppm: The number of packets per million |
+| | packets sent that it is acceptable to lose, i.e. not receive.|
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with pktgen included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The hosts are installed, as server and client. pktgen is |
+| | invoked and logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
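+
+Cache utilization can be sampled during the run with the cachestat script
+from perf-tools, e.g. as below (an illustrative invocation; the scenario's
+actual wrapper may differ)::
+
+  # print page cache hit/miss statistics every second
+  ./cachestat 1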
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc072.rst b/docs/testing/user/userguide/opnfv_yardstick_tc072.rst
new file mode 100644
index 000000000..2e7ee057c
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc072.rst
@@ -0,0 +1,110 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC072
+*************************************
+
+.. _cirros: https://download.cirros-cloud.net
+.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
+.. _sar: http://linux.die.net/man/1/sar
+
++-----------------------------------------------------------------------------+
+|Latency, Network Utilization, Throughput, Packet Loss |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC072_Latency, Network Utilization, |
+| | Throughput, Packet Loss |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Number of flows, latency, throughput, Network Utilization, |
+| | packet loss |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS network performance with regards to |
+| | flows and throughput, in particular how different numbers of |
+| | flows affect the throughput between hosts on different |
+| | compute blades. For example, the performance of a vSwitch |
+| | typically depends on the number of flows running through it. |
+| | The performance of other equipment or entities can also |
+| | depend on the number of flows or on the packet sizes used. |
+| | The purpose is also to be able to spot trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc072.yaml |
+| | |
+| | Packet size: 64 bytes |
+| | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000. |
+| | The configured port amounts map to between 2 and 1001000 |
+| | flows, respectively. Each port amount is run two times, for |
+| | 20 seconds each. Then the next port_amount is run, and so on.|
+| | During the test Network Utilization on both client and |
+| | server, and the network latency between the client and server|
+| | are measured. |
+| | The client and server are distributed on different hardware. |
+| | For SLA max_ppm is set to 1000. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | pktgen |
+| | |
+| | Pktgen is not always part of a Linux distribution, hence it |
+| | needs to be installed. It is part of the Yardstick Glance |
+| | image. |
+| | (As an example see the /yardstick/tools/ directory for how |
+| | to generate a Linux image with pktgen included.) |
+| | |
+| | ping |
+| | |
+| | Ping is normally part of any Linux distribution, hence it |
+| | doesn't need to be installed. It is also part of the |
+| | Yardstick Glance image. |
+| | (For example, a cirros_ image can be downloaded; it |
+| | includes ping.) |
+| | |
+| | sar |
+| | |
+| | The sar command writes to standard output the contents of |
+| | selected cumulative activity counters in the operating |
+| | system. |
+| | sar is normally part of a Linux distribution, hence it |
+| | doesn't need to be installed. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | Ping and sar man pages |
+| | |
+| | pktgen_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes, amount |
+| | of flows and test duration. Default values exist. |
+| | |
+| | SLA (optional): max_ppm: The number of packets per million |
+| | packets sent that it is acceptable to lose, i.e. not receive.|
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with pktgen included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The hosts are installed, as server and client. pktgen is |
+| | invoked and logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
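+
+Network utilization can be sampled during the run with sar, e.g. as below
+(an illustrative invocation)::
+
+  # report network device statistics once per second, 20 times
+  sar -n DEV 1 20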
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
new file mode 100644
index 000000000..ad4526405
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
@@ -0,0 +1,81 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC073
+*************************************
+
+.. _netperf: http://www.netperf.org/netperf/training/Netperf.html
+
++-----------------------------------------------------------------------------+
+|Throughput per NFVI node test |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC073_Network latency and throughput between |
+| | nodes |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Network latency and throughput |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS network performance with regards to |
+| | flows and throughput, in particular how different packet |
+| | sizes and numbers of flows affect the throughput between |
+| | nodes in one pod. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc073.yaml |
+| | |
+| | Packet size: default 1024 bytes. |
+| | |
+| | Test length: default 20 seconds. |
+| | |
+| | The client and server are distributed on different nodes. |
+| | |
+| | For SLA max_mean_latency is set to 100. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | netperf_ |
+| | Netperf is a software application that provides network |
+| | bandwidth testing between two hosts on a network. It |
+| | supports Unix domain sockets, TCP, SCTP, DLPI and UDP via |
+| | BSD Sockets. Netperf provides a number of predefined tests, |
+| | e.g. to measure bulk (unidirectional) data transfer or |
+| | request/response performance. |
+| | (netperf is not always part of a Linux distribution, hence |
+| | it needs to be installed.) |
+| | |
++--------------+--------------------------------------------------------------+
+|references | netperf man pages |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes and |
+| | test duration. Default values exist. |
+| | |
+| | SLA (optional): max_mean_latency |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The POD can be reached by its external IP and logged into |
+|conditions | via SSH. |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Install the netperf tool on each specified node; one acts as |
+| | the server, and the other as the client. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Log on to the client node and use the netperf command to |
+| | execute the network performance test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The throughput results are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
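+
+On the client node, a run matching the defaults above could look like the
+following (an illustrative netperf invocation; the server address is a
+placeholder)::
+
+  # 20 second TCP stream test with 1024 byte messages
+  netperf -H <server_ip> -l 20 -t TCP_STREAM -- -m 1024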
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
new file mode 100644
index 000000000..92cd51439
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
@@ -0,0 +1,137 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC074
+*************************************
+
+.. _Storperf: https://wiki.opnfv.org/display/storperf/Storperf
+
++-----------------------------------------------------------------------------+
+|Storperf |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC074_Storperf |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Storage performance |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | StorPerf integration with Yardstick. The purpose of StorPerf |
+| | is to provide a tool to measure block and object storage |
+| | performance in an NFVI. When complemented with a |
+| | characterization of typical VF storage performance |
+| | requirements, it can provide pass/fail thresholds for test, |
+| | staging, and production NFVI environments. |
+| | |
+| | The benchmarks developed for block and object storage will |
+| | be sufficiently varied to provide a good preview of expected |
+| | storage performance behavior for any type of VNF workload. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc074.yaml |
+| | |
+| | * agent_count: 1 - the number of VMs to be created |
+| | * agent_image: "Ubuntu-14.04" - image used for creating VMs |
+| | * public_network: "ext-net" - name of public network |
+| | * volume_size: 2 - cinder volume size |
+| | * block_sizes: "4096" - data block size |
+| | * queue_depths: "4" |
+| | * StorPerf_ip: "192.168.200.2" |
+| | * query_interval: 10 - state query interval |
+| | * timeout: 600 - maximum allowed job time |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Storperf_ |
+| | |
+| | StorPerf is a tool to measure block and object storage |
+| | performance in an NFVI. |
+| | |
+| | StorPerf is delivered as a Docker container from |
+| | https://hub.docker.com/r/opnfv/storperf/tags/. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | Storperf_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * agent_count |
+| | * volume_size |
+| | * block_sizes |
+| | * queue_depths |
+| | * query_interval |
+| | * timeout |
+| | * target=[device or path] |
+| | The path to either an attached storage device |
+| | (/dev/vdb, etc.) or a directory path (/opt/storperf) that |
+| | will be used to execute the performance test. In the case |
+| | of a device, the entire device will be used. If not |
+| | specified, the current directory will be used. |
+| | * workload=[workload module] |
+| | If not specified, the default is to run all workloads. The |
+| | workload types are: |
+| | - rs: 100% Read, sequential data |
+| | - ws: 100% Write, sequential data |
+| | - rr: 100% Read, random access |
+| | - wr: 100% Write, random access |
+| | - rw: 70% Read / 30% write, random access |
+| | * nossd: Do not perform SSD style preconditioning. |
+| | * nowarm: Do not perform a warmup prior to |
+| | measurements. |
+| | * report=[job_id] |
+| | Query the status of the supplied job_id and report on |
+| | metrics. If a workload is supplied, will report on only |
+| | that subset. |
+| | |
+| | There are default values for each above-mentioned option. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | If you do not have an Ubuntu 14.04 image in Glance, you will |
+|conditions | need to add one. A key pair for launching agents is also |
+| | required. |
+| | |
+| | StorPerf must be installed in the environment. There are two |
+| | possible methods for StorPerf installation: |
+| | Run the container on the Jump Host |
+| | Run the container in a VM |
+| | |
+| | Running StorPerf on the Jump Host |
+| | Requirements: |
+| | - Docker must be installed |
+| | - Jump Host must have access to the OpenStack Controller |
+| | API |
+| | - Jump Host must have internet connectivity for |
+| | downloading the docker image |
+| | - Enough floating IPs must be available to match your |
+| | agent count |
+| | |
+| | Running StorPerf in a VM |
+| | Requirements: |
+| | - VM has docker installed |
+| | - VM has OpenStack Controller credentials and can |
+| | communicate with the Controller API |
+| | - VM has internet connectivity for downloading the |
+| | docker image |
+| | - Enough floating IPs must be available to match your |
+| | agent count |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | StorPerf is installed and the Ubuntu 14.04 image is stored |
+| | in Glance. The TC is invoked and logs are produced and |
+| | stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | None. Storage performance results are fetched and stored. |
+| | |
++--------------+--------------------------------------------------------------+
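+
+A sketch of the scenario section of such a task file is shown below. The
+option values mirror the configuration listed above; the scenario type name
+and the surrounding structure are assumptions, not verbatim from the |
+released file::
+
+  scenarios:
+  -
+    type: StorPerf
+    options:
+      agent_count: 1
+      agent_image: "Ubuntu-14.04"
+      public_network: "ext-net"
+      volume_size: 2
+      block_sizes: "4096"
+      queue_depths: "4"
+      StorPerf_ip: "192.168.200.2"
+      query_interval: 10
+      timeout: 600
+    runner:
+      type: Iteration
+      iterations: 1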
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc075.rst b/docs/testing/user/userguide/opnfv_yardstick_tc075.rst
new file mode 100644
index 000000000..a6ff34447
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc075.rst
@@ -0,0 +1,60 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC075
+*************************************
+
+
++-----------------------------------------------------------------------------+
+|Network Capacity and Scale Testing |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC075_Network_Capacity_and_Scale_testing |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Number of connections, Number of frames sent/received |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the network capacity and scale with regards to |
+| | connections and frames. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc075.yaml |
+| | |
+| | There is no additional configuration to be set for this TC. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | netstat |
+| | |
+| | Netstat is normally part of any Linux distribution, hence it |
+| | doesn't need to be installed. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | Netstat man page |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This test case is mainly for evaluating network performance. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | Each pod node must have netstat included in it. |
+|conditions | |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The pod is available. |
+| | Netstat is invoked and logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | None. Number of connections and frames are fetched and |
+| | stored. |
+| | |
++--------------+--------------------------------------------------------------+
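+
+The metrics gathered here can be reproduced manually with netstat, e.g. with
+the illustrative invocations below::
+
+  # per-protocol summary statistics (connections, segments, datagrams)
+  netstat -s
+  # per-interface packet counters
+  netstat -i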
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc076.rst b/docs/testing/user/userguide/opnfv_yardstick_tc076.rst
new file mode 100644
index 000000000..ac7bde794
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc076.rst
@@ -0,0 +1,61 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC076
+*************************************
+
+
++-----------------------------------------------------------------------------+
+|Monitor Network Metrics |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC076_Monitor_Network_Metrics |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | IP datagram error rate, ICMP message error rate, |
+| | TCP segment error rate and UDP datagram error rate |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | Monitor network metrics provided by the kernel in a host and |
+| | calculate the IP datagram error rate, ICMP message error |
+| | rate, TCP segment error rate and UDP datagram error rate. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc076.yaml |
+| | |
+| | There is no additional configuration to be set for this TC. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | nstat |
+| | |
+| | nstat is a simple tool to monitor kernel SNMP counters and |
+| | network interface statistics. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | nstat man page |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This test case is mainly for monitoring network metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | |
+|conditions | |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The pod is available. |
+| | Nstat is invoked and logs are produced and stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | None. |
+| | |
++--------------+--------------------------------------------------------------+

diff --git a/docs/testing/user/userguide/references.rst b/docs/testing/user/userguide/references.rst
new file mode 100644
index 000000000..05729ba75
--- /dev/null
+++ b/docs/testing/user/userguide/references.rst
@@ -0,0 +1,60 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+ +========== +References +========== + + +OPNFV +===== + +* Parser wiki: https://wiki.opnfv.org/parser +* Pharos wiki: https://wiki.opnfv.org/pharos +* VTC: https://wiki.opnfv.org/vtc +* Yardstick CI: https://build.opnfv.org/ci/view/yardstick/ +* Yardstick and ETSI TST001 presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925205%2Fopnfv_summit_-_bridging_opnfv_and_etsi.pdf +* Yardstick Project presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925208%2Fopnfv_summit_-_yardstick_project.pdf +* Yardstick wiki: https://wiki.opnfv.org/yardstick + +References used in Test Cases +============================= + +* cachestat: https://github.com/brendangregg/perf-tools/tree/master/fs +* cirros-image: https://download.cirros-cloud.net +* cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest +* DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/ +* DPDK supported NICs: http://dpdk.org/doc/nics +* fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html +* fio: http://www.bluestop.org/fio/HOWTO.txt +* free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html +* iperf3: https://iperf.fr/ +* iostat: http://linux.die.net/man/1/iostat +* Lmbench man-pages: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html +* Memory bandwidth man-pages: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html +* mpstat man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html +* netperf: http://www.netperf.org/netperf/training/Netperf.html +* pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt +* RAMspeed: http://alasir.com/software/ramspeed/ +* sar: http://linux.die.net/man/1/sar +* SR-IOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking +* Storperf: https://wiki.opnfv.org/display/storperf/Storperf +* unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench + + +Research +======== + +* NCSRD: http://www.demokritos.gr/?lang=en +* T-NOVA: http://www.t-nova.eu/ +* T-NOVA Results: http://www.t-nova.eu/results/ + +Standards +========= + +* ETSI NFV: http://www.etsi.org/technologies-clusters/technologies/nfv +* ETSI GS-NFV TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf +* RFC2544: https://www.ietf.org/rfc/rfc2544.txt + diff --git a/docs/testing/user/userguide/testcase_description_v2_template.rst b/docs/testing/user/userguide/testcase_description_v2_template.rst new file mode 100644 index 000000000..91c2a7e33 --- /dev/null +++ b/docs/testing/user/userguide/testcase_description_v2_template.rst @@ -0,0 +1,64 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB and others. + +************************************* +Yardstick Test Case Description TCXXX +************************************* + ++-----------------------------------------------------------------------------+ +|test case slogan e.g. Network Latency | +| | ++--------------+--------------------------------------------------------------+ +|test case id | e.g. OPNFV_YARDSTICK_TC001_NW Latency | +| | | ++--------------+--------------------------------------------------------------+ +|metric | what will be measured, e.g. 
latency |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | describe the purpose of the test case |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | what .yaml file to use, state SLA if applicable, state |
+| | test duration, list and describe the scenario options used in|
+| | this TC and also list the options using default values. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | e.g. ping |
+| | |
++--------------+--------------------------------------------------------------+
+|references | e.g. RFCxxx, ETSI-NFVyyy |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | describe variations of the test case which can be |
+| | performed, e.g. run the test for different packet sizes |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | describe configuration in the tool(s) used to perform |
+|conditions | the measurements (e.g. fio, pktgen), POD-specific |
+| | configuration required to enable running the test |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | use this to describe tests that require several steps, e.g. |
+| | collect logs. |
+| | |
+| | Result: what happens in this step, e.g. logs collected |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | remove interface |
+| | |
+| | Result: interface down. |
+| | |
++--------------+--------------------------------------------------------------+
+|step N | what is done in step N |
+| | |
+| | Result: what happens |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | expected behavior, or SLA, pass/fail criteria |
+| | |
++--------------+--------------------------------------------------------------+

diff --git a/docs/userguide/01-introduction.rst b/docs/userguide/01-introduction.rst
deleted file mode 100755
index 0e0eea002..000000000
--- a/docs/userguide/01-introduction.rst
+++ /dev/null
@@ -1,79 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-============
-Introduction
-============
-
-**Welcome to Yardstick's documentation !**
-
-.. _Pharos: https://wiki.opnfv.org/pharos
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-.. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2
-Yardstick_ is an OPNFV Project.
-
-The project's goal is to verify infrastructure compliance, from the perspective
-of a Virtual Network Function (:term:`VNF`).
-
-The Project's scope is the development of a test framework, *Yardstick*, test
-cases and test stimuli to enable Network Function Virtualization Infrastructure
-(:term:`NFVI`) verification.
-The Project also includes a sample :term:`VNF`, the Virtual Traffic Classifier
-(:term:`VTC`) and its experimental framework, *ApexLake* !
-
-*Yardstick* is used in OPNFV for verifying the OPNFV infrastructure and some of
-the OPNFV features. The *Yardstick* framework is deployed in several OPNFV
-community labs.
It is *installer*, *infrastructure* and *application* -independent. - -.. seealso:: Pharos_ for information on OPNFV community labs and this - Presentation_ for an overview of *Yardstick* - - -About This Document -=================== - -This document consists of the following chapters: - -* Chapter :doc:`02-methodology` describes the methodology implemented by the - Yardstick Project for :term:`NFVI` verification. - -* Chapter :doc:`03-architecture` provides information on the software architecture - of yardstick. - -* Chapter :doc:`04-vtc-overview` provides information on the :term:`VTC`. - -* Chapter :doc:`05-apexlake_installation` provides instructions to install the - experimental framework *ApexLake* - -* Chapter :doc:`06-apexlake_api` explains how this framework is integrated in - *Yardstick*. - -* Chapter :doc:`07-nsb-overview` describes the methodology implemented by the - yardstick - Network service benchmarking to test real world usecase for a - given VNF - -* Chapter :doc:`08-nsb_installation` provides instructions to install - *Yardstick - Network service benchmarking testing*. - -* Chapter :doc:`09-installation` provides instructions to install *Yardstick*. - -* Chapter :doc:`10-yardstick_plugin` provides information on how to integrate - other OPNFV testing projects into *Yardstick*. - -* Chapter :doc:`11-result-store-InfluxDB` provides inforamtion on how to run - plug-in test cases and store test results into community's InfluxDB. - -* Chapter :doc:`12-list-of-tcs` includes a list of available Yardstick test - cases. - - -Contact Yardstick -================= - -Feedback? `Contact us`_ - -.. _Contact us: opnfv-users@lists.opnfv.org - diff --git a/docs/userguide/02-methodology.rst b/docs/userguide/02-methodology.rst deleted file mode 100644 index 34d271095..000000000 --- a/docs/userguide/02-methodology.rst +++ /dev/null @@ -1,195 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -=========== -Methodology -=========== - -Abstract -======== - -This chapter describes the methodology implemented by the Yardstick project for -verifying the :term:`NFVI` from the perspective of a :term:`VNF`. - -ETSI-NFV -======== - -.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf -.. _Yardsticktst: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_bridging_opnfv_and_etsi.pdf?version=1&modificationDate=1458848320000&api=v2 - - -The document ETSI GS NFV-TST001_, "Pre-deployment Testing; Report on Validation -of NFV Environments and Services", recommends methods for pre-deployment -testing of the functional components of an NFV environment. - -The Yardstick project implements the methodology described in chapter 6, "Pre- -deployment validation of NFV infrastructure". - -The methodology consists in decomposing the typical :term:`VNF` work-load -performance metrics into a number of characteristics/performance vectors, which -each can be represented by distinct test-cases. - -The methodology includes five steps: - -* *Step1:* Define Infrastruture - the Hardware, Software and corresponding - configuration target for validation; the OPNFV infrastructure, in OPNFV - community labs. - -* *Step2:* Identify :term:`VNF` type - the application for which the - infrastructure is to be validated, and its requirements on the underlying - infrastructure. 
- -* *Step3:* Select test cases - depending on the workload that represents the - application for which the infrastruture is to be validated, the relevant - test cases amongst the list of available Yardstick test cases. - -* *Step4:* Execute tests - define the duration and number of iterations for the - selected test cases, tests runs are automated via OPNFV Jenkins Jobs. - -* *Step5:* Collect results - using the common API for result collection. - -.. seealso:: Yardsticktst_ for material on alignment ETSI TST001 and Yardstick. - -Metrics -======= - -The metrics, as defined by ETSI GS NFV-TST001, are shown in -:ref:`Table1 `, :ref:`Table2 ` and -:ref:`Table3 `. - -In OPNFV Colorado release, generic test cases covering aspects of the listed -metrics are available; further OPNFV releases will provide extended testing of -these metrics. -The view of available Yardstick test cases cross ETSI definitions in -:ref:`Table1 `, :ref:`Table2 ` and :ref:`Table3 ` -is shown in :ref:`Table4 `. -It shall be noticed that the Yardstick test cases are examples, the test -duration and number of iterations are configurable, as are the System Under -Test (SUT) and the attributes (or, in Yardstick nomemclature, the scenario -options). - -.. _table2_1: - -**Table 1 - Performance/Speed Metrics** - -+---------+-------------------------------------------------------------------+ -| Category| Performance/Speed | -| | | -+---------+-------------------------------------------------------------------+ -| Compute | * Latency for random memory access | -| | * Latency for cache read/write operations | -| | * Processing speed (instructions per second) | -| | * Throughput for random memory access (bytes per second) | -| | | -+---------+-------------------------------------------------------------------+ -| Network | * Throughput per NFVI node (frames/byte per second) | -| | * Throughput provided to a VM (frames/byte per second) | -| | * Latency per traffic flow | -| | * Latency between VMs | -| | * Latency between NFVI nodes | -| | * Packet delay variation (jitter) between VMs | -| | * Packet delay variation (jitter) between NFVI nodes | -| | | -+---------+-------------------------------------------------------------------+ -| Storage | * Sequential read/write IOPS | -| | * Random read/write IOPS | -| | * Latency for storage read/write operations | -| | * Throughput for storage read/write operations | -| | | -+---------+-------------------------------------------------------------------+ - -.. 
_table2_2: - -**Table 2 - Capacity/Scale Metrics** - -+---------+-------------------------------------------------------------------+ -| Category| Capacity/Scale | -| | | -+---------+-------------------------------------------------------------------+ -| Compute | * Number of cores and threads- Available memory size | -| | * Cache size | -| | * Processor utilization (max, average, standard deviation) | -| | * Memory utilization (max, average, standard deviation) | -| | * Cache utilization (max, average, standard deviation) | -| | | -+---------+-------------------------------------------------------------------+ -| Network | * Number of connections | -| | * Number of frames sent/received | -| | * Maximum throughput between VMs (frames/byte per second) | -| | * Maximum throughput between NFVI nodes (frames/byte per second) | -| | * Network utilization (max, average, standard deviation) | -| | * Number of traffic flows | -| | | -+---------+-------------------------------------------------------------------+ -| Storage | * Storage/Disk size | -| | * Capacity allocation (block-based, object-based) | -| | * Block size | -| | * Maximum sequential read/write IOPS | -| | * Maximum random read/write IOPS | -| | * Disk utilization (max, average, standard deviation) | -| | | -+---------+-------------------------------------------------------------------+ - -.. _table2_3: - -**Table 3 - Availability/Reliability Metrics** - -+---------+-------------------------------------------------------------------+ -| Category| Availability/Reliability | -| | | -+---------+-------------------------------------------------------------------+ -| Compute | * Processor availability (Error free processing time) | -| | * Memory availability (Error free memory time) | -| | * Processor mean-time-to-failure | -| | * Memory mean-time-to-failure | -| | * Number of processing faults per second | -| | | -+---------+-------------------------------------------------------------------+ -| Network | * NIC availability (Error free connection time) | -| | * Link availability (Error free transmission time) | -| | * NIC mean-time-to-failure | -| | * Network timeout duration due to link failure | -| | * Frame loss rate | -| | | -+---------+-------------------------------------------------------------------+ -| Storage | * Disk availability (Error free disk access time) | -| | * Disk mean-time-to-failure | -| | * Number of failed storage read/write operations per second | -| | | -+---------+-------------------------------------------------------------------+ - -.. _table2_4: - -**Table 4 - Yardstick Generic Test Cases** - -+---------+-------------------+----------------+------------------------------+ -| Category| Performance/Speed | Capacity/Scale | Availability/Reliability | -| | | | | -+---------+-------------------+----------------+------------------------------+ -| Compute | TC003 [1]_ | TC003 [1]_ | TC013 [1]_ | -| | TC004 | TC004 | TC015 [1]_ | -| | TC010 | TC024 | | -| | TC012 | TC055 | | -| | TC014 | | | -| | TC069 | | | -+---------+-------------------+----------------+------------------------------+ -| Network | TC001 | TC044 | TC016 [1]_ | -| | TC002 | TC073 | TC018 [1]_ | -| | TC009 | TC075 | | -| | TC011 | | | -| | TC042 | | | -| | TC043 | | | -+---------+-------------------+----------------+------------------------------+ -| Storage | TC005 | TC063 | TC017 [1]_ | -+---------+-------------------+----------------+------------------------------+ - -.. 
note:: The description in this OPNFV document is intended as a reference for - users to understand the scope of the Yardstick Project and the - deliverables of the Yardstick framework. For complete description of - the methodology, please refer to the ETSI document. - -.. rubric:: Footnotes -.. [1] To be included in future deliveries. - diff --git a/docs/userguide/03-architecture.rst b/docs/userguide/03-architecture.rst deleted file mode 100755 index 03bf00f58..000000000 --- a/docs/userguide/03-architecture.rst +++ /dev/null @@ -1,266 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) 2016 Huawei Technologies Co.,Ltd and others - -============ -Architecture -============ - -Abstract -======== -This chapter describes the yardstick framework software architecture. we will introduce it from Use-Case View, -Logical View, Process View and Deployment View. More technical details will be introduced in this chapter. - -Overview -======== - -Architecture overview ---------------------- -Yardstick is mainly written in Python, and test configurations are made -in YAML. Documentation is written in reStructuredText format, i.e. .rst -files. Yardstick is inspired by Rally. Yardstick is intended to run on a -computer with access and credentials to a cloud. The test case is described -in a configuration file given as an argument. - -How it works: the benchmark task configuration file is parsed and converted into -an internal model. The context part of the model is converted into a Heat -template and deployed into a stack. Each scenario is run using a runner, either -serially or in parallel. Each runner runs in its own subprocess executing -commands in a VM using SSH. The output of each scenario is written as json -records to a file or influxdb or http server, we use influxdb as the backend, -the test result will be shown with grafana. - - -Concept -------- -**Benchmark** - assess the relative performance of something - -**Benchmark** configuration file - describes a single test case in yaml format - -**Context** - The set of Cloud resources used by a scenario, such as user -names, image names, affinity rules and network configurations. A context is -converted into a simplified Heat template, which is used to deploy onto the -Openstack environment. - -**Data** - Output produced by running a benchmark, written to a file in json format - -**Runner** - Logic that determines how a test scenario is run and reported, for -example the number of test iterations, input value stepping and test duration. -Predefined runner types exist for re-usage, see `Runner types`_. - -**Scenario** - Type/class of measurement for example Ping, Pktgen, (Iperf, LmBench, ...) - -**SLA** - Relates to what result boundary a test case must meet to pass. For -example a latency limit, amount or ratio of lost packets and so on. Action -based on :term:`SLA` can be configured, either just to log (monitor) or to stop -further testing (assert). The :term:`SLA` criteria is set in the benchmark -configuration file and evaluated by the runner. - - -Runner types ------------- - -There exists several predefined runner types to choose between when designing -a test scenario: - -**Arithmetic:** -Every test run arithmetically steps the specified input value(s) in the -test scenario, adding a value to the previous input value. It is also possible -to combine several input values for the same test case in different -combinations. 
- -Snippet of an Arithmetic runner configuration: -:: - - - runner: - type: Arithmetic - iterators: - - - name: stride - start: 64 - stop: 128 - step: 64 - -**Duration:** -The test runs for a specific period of time before completed. - -Snippet of a Duration runner configuration: -:: - - - runner: - type: Duration - duration: 30 - -**Sequence:** -The test changes a specified input value to the scenario. The input values -to the sequence are specified in a list in the benchmark configuration file. - -Snippet of a Sequence runner configuration: -:: - - - runner: - type: Sequence - scenario_option_name: packetsize - sequence: - - 100 - - 200 - - 250 - - -**Iteration:** -Tests are run a specified number of times before completed. - -Snippet of an Iteration runner configuration: -:: - - - runner: - type: Iteration - iterations: 2 - - - - -Use-Case View -============= -Yardstick Use-Case View shows two kinds of users. One is the Tester who will -do testing in cloud, the other is the User who is more concerned with test result -and result analyses. - -For testers, they will run a single test case or test case suite to verify -infrastructure compliance or bencnmark their own infrastructure performance. -Test result will be stored by dispatcher module, three kinds of store method -(file, influxdb and http) can be configured. The detail information of -scenarios and runners can be queried with CLI by testers. - -For users, they would check test result with four ways. - -If dispatcher module is configured as file(default), there are two ways to -check test result. One is to get result from yardstick.out ( default path: -/tmp/yardstick.out), the other is to get plot of test result, it will be shown -if users execute command "yardstick-plot". - -If dispatcher module is configured as influxdb, users will check test -result on Grafana which is most commonly used for visualizing time series data. - -If dispatcher module is configured as http, users will check test result -on OPNFV testing dashboard which use MongoDB as backend. - -.. image:: images/Use_case.png - :width: 800px - :alt: Yardstick Use-Case View - -Logical View -============ -Yardstick Logical View describes the most important classes, their -organization, and the most important use-case realizations. - -Main classes: - -**TaskCommands** - "yardstick task" subcommand handler. - -**HeatContext** - Do test yaml file context section model convert to HOT, -deploy and undeploy Openstack heat stack. - -**Runner** - Logic that determines how a test scenario is run and reported. - -**TestScenario** - Type/class of measurement for example Ping, Pktgen, (Iperf, -LmBench, ...) - -**Dispatcher** - Choose user defined way to store test results. - -TaskCommands is the "yardstick task" subcommand's main entry. It takes yaml -file (e.g. test.yaml) as input, and uses HeatContext to convert the yaml -file's context section to HOT. After Openstack heat stack is deployed by -HeatContext with the converted HOT, TaskCommands use Runner to run specified -TestScenario. During first runner initialization, it will create output -process. The output process use Dispatcher to push test results. The Runner -will also create a process to execute TestScenario. And there is a -multiprocessing queue between each runner process and output process, so the -runner process can push the real-time test results to the storage media. -TestScenario is commonly connected with VMs by using ssh. It sets up VMs and -run test measurement scripts through the ssh tunnel. 
After all TestScenaio -is finished, TaskCommands will undeploy the heat stack. Then the whole test is -finished. - -.. image:: images/Logical_view.png - :width: 800px - :alt: Yardstick Logical View - -Process View (Test execution flow) -================================== -Yardstick process view shows how yardstick runs a test case. Below is the -sequence graph about the test execution flow using heat context, and each -object represents one module in yardstick: - -.. image:: images/test_execution_flow.png - :width: 800px - :alt: Yardstick Process View - -A user wants to do a test with yardstick. He can use the CLI to input the -command to start a task. "TaskCommands" will receive the command and ask -"HeatContext" to parse the context. "HeatContext" will then ask "Model" to -convert the model. After the model is generated, "HeatContext" will inform -"Openstack" to deploy the heat stack by heat template. After "Openstack" -deploys the stack, "HeatContext" will inform "Runner" to run the specific test -case. - -Firstly, "Runner" would ask "TestScenario" to process the specific scenario. -Then "TestScenario" will start to log on the openstack by ssh protocal and -execute the test case on the specified VMs. After the script execution -finishes, "TestScenario" will send a message to inform "Runner". When the -testing job is done, "Runner" will inform "Dispatcher" to output the test -result via file, influxdb or http. After the result is output, "HeatContext" -will call "Openstack" to undeploy the heat stack. Once the stack is -undepoyed, the whole test ends. - -Deployment View -=============== -Yardstick deployment view shows how the yardstick tool can be deployed into the -underlying platform. Generally, yardstick tool is installed on JumpServer(see -`07-installation` for detail installation steps), and JumpServer is -connected with other control/compute servers by networking. Based on this -deployment, yardstick can run the test cases on these hosts, and get the test -result for better showing. - -.. image:: images/Deployment.png - :width: 800px - :alt: Yardstick Deployment View - -Yardstick Directory structure -============================= - -**yardstick/** - Yardstick main directory. - -*ci/* - Used for continuous integration of Yardstick at different PODs and - with support for different installers. - -*docs/* - All documentation is stored here, such as configuration guides, - user guides and Yardstick descriptions. - -*etc/* - Used for test cases requiring specific POD configurations. - -*samples/* - test case samples are stored here, most of all scenario and - feature's samples are shown in this directory. - -*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as - well as the test cases run to verify the NFVI (*opnfv/*) are stored. - Also configurations of what to run daily and weekly at the different - PODs is located here. - -*tools/* - Currently contains tools to build image for VMs which are deployed - by Heat. Currently contains how to build the yardstick-trusty-server - image with the different tools that are needed from within the image. - -*plugin/* - Plug-in configuration files are stored here. - -*vTC/* - Contains the files for running the virtual Traffic Classifier tests. - -*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts, - CLI parsing, keys, plotting tools, dispatcher, plugin - install/remove scripts and so on. 
-
diff --git a/docs/userguide/04-vtc-overview.rst b/docs/userguide/04-vtc-overview.rst
deleted file mode 100644
index 82b20cad5..000000000
--- a/docs/userguide/04-vtc-overview.rst
+++ /dev/null
@@ -1,122 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
-
-==========================
-Virtual Traffic Classifier
-==========================
-
-Abstract
-========
-
-.. _TNOVA: http://www.t-nova.eu/
-.. _TNOVAresults: http://www.t-nova.eu/results/
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-
-This chapter provides an overview of the virtual Traffic Classifier, a
-contribution to OPNFV Yardstick_ from the EU Project TNOVA_.
-Additional documentation is available in TNOVAresults_.
-
-Overview
-========
-
-The virtual Traffic Classifier (:term:`VTC`) :term:`VNF` comprises a single
-Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains
-both the Traffic Inspection module and the Traffic Forwarding module needed
-to run the :term:`VNF`. The exploitation of Deep Packet Inspection
-(:term:`DPI`) methods for traffic classification is built around two basic
-assumptions:
-
-* third parties unaffiliated with either source or recipient are able to
-  inspect each IP packet's payload
-
-* the classifier knows the relevant syntax of each application's packet
-  payloads (protocol signatures, data patterns, etc.).
-
-The proposed :term:`DPI` based approach only uses an indicative, small
-number of the initial packets from each flow in order to identify the
-content, rather than inspecting each packet.
-
-In this respect it follows the Packet Based per Flow State (:term:`PBFS`)
-approach. This method uses a table to track each session, based on the
-5-tuple (src address, dest address, src port, dest port, transport protocol)
-that is maintained for each flow.
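-
-As a toy illustration of the PBFS idea (this is not vTC code; the record
-fields and the trivial "HTTP" signature match are invented for the example),
-a flow table keyed by the 5-tuple might look like this:
-
-.. code-block:: python
-
-    # Toy illustration of PBFS-style flow tracking (not vTC code): key each
-    # session by its 5-tuple and stop inspecting once the flow is classified.
-    flows = {}
-
-    def classify(packet, max_inspected=5):
-        key = (packet["src"], packet["dst"], packet["sport"],
-               packet["dport"], packet["proto"])
-        entry = flows.setdefault(key, {"seen": 0, "app": None})
-        if entry["app"] is None and entry["seen"] < max_inspected:
-            entry["seen"] += 1
-            # a real classifier would match payload signatures here (e.g. nDPI)
-            if b"HTTP/" in packet["payload"]:
-                entry["app"] = "http"
-        return entry["app"] or "unknown"
-
-    pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 34567,
-           "dport": 80, "proto": "tcp", "payload": b"GET / HTTP/1.1"}
-    print(classify(pkt))  # -> "http"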
-
-Concepts
-========
-
-* *Traffic Inspection*: The process of packet analysis and application
-  identification of network traffic that passes through the :term:`VTC`.
-
-* *Traffic Forwarding*: The process of packet forwarding from an incoming
-  network interface to a pre-defined outgoing network interface.
-
-* *Traffic Rule Application*: The process of packet tagging, based on a
-  predefined set of rules. Packet tagging may include e.g. Type of Service
-  (:term:`ToS`) field modification.
-
-Architecture
-============
-
-The Traffic Inspection module is the most computationally intensive
-component of the :term:`VNF`. It implements filtering and packet matching
-algorithms in order to support the enhanced traffic forwarding capability of
-the :term:`VNF`. The component supports a flow table (exploiting hashing
-algorithms for fast indexing of flows) and an inspection engine for traffic
-classification.
-
-The implementation used for these experiments exploits the nDPI library.
-The packet capturing mechanism is implemented using libpcap. When the
-:term:`DPI` engine identifies a new flow, the flow register is updated with
-the appropriate information and transmitted across the Traffic Forwarding
-module, which then applies any required policy updates.
-
-The Traffic Forwarding module is responsible for routing and packet
-forwarding. It accepts incoming network traffic, consults the flow table for
-classification information for each incoming flow and then applies
-pre-defined policies, marking e.g. :term:`ToS`/Differentiated Services Code
-Point (:term:`DSCP`) multimedia traffic for Quality of Service (:term:`QoS`)
-enablement on the forwarded traffic.
-It is assumed that the traffic is forwarded using the default policy until
-it is identified and new policies are enforced.
-
-The expected response delay is considered to be negligible, as only a small
-number of packets are required to identify each flow.
-
-Graphical Overview
-==================
-
-.. code-block:: console
-
-  +----------------------------+
-  |                            |
-  | Virtual Traffic Classifier |
-  |                            |
-  |     Analysing/Forwarding   |
-  |        ------------>       |
-  |     ethA          ethB     |
-  |                            |
-  +----------------------------+
-       |                 ^
-       |                 |
-       v                 |
-  +----------------------------+
-  |                            |
-  |       Virtual Switch       |
-  |                            |
-  +----------------------------+
-
-Install
-=======
-
-Run the build.sh script with root privileges.
-
-Run
-===
-
-::
-
-  sudo ./pfbridge -a eth1 -b eth2
-
-Development Environment
-=======================
-
-Ubuntu 14.04
diff --git a/docs/userguide/05-apexlake_installation.rst b/docs/userguide/05-apexlake_installation.rst
deleted file mode 100644
index d4493e0f8..000000000
--- a/docs/userguide/05-apexlake_installation.rst
+++ /dev/null
@@ -1,300 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation and others.
-
-
-.. _DPDK: http://dpdk.org/doc/nics
-.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/
-.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
-.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver
-.. _here: https://wiki.opnfv.org/vtc
-
-
-===========================
-Apexlake Installation Guide
-===========================
-
-Abstract
---------
-
-ApexLake is a framework that provides automatic execution of experiments and
-related data collection to enable a user to validate infrastructure from the
-perspective of a Virtual Network Function (:term:`VNF`).
-
-In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`)
-network function is utilized.
-
-
-Framework Hardware Dependencies
-===============================
-
-In order to run the framework there are some hardware related dependencies
-for ApexLake.
-
-The framework needs to be installed on the same physical node where
-DPDK-pktgen_ is installed.
-
-The installation requires that the physical node hosting the packet
-generator has 2 NICs which are DPDK_ compatible.
-
-The 2 NICs will be connected to the switch where the OpenStack VM network is
-managed.
-
-The switch used must support multicast traffic and :term:`IGMP` snooping.
-Further details about the configuration are provided here_.
-
-The corresponding ports to which the cables are connected need to be
-configured as VLAN trunks, using two of the VLAN IDs available for Neutron.
-Note the VLAN IDs used, as they will be required in later configuration
-steps.
-
-
-Framework Software Dependencies
-===============================
-Before starting the framework, a number of dependencies must first be
-installed. The following describes the set of instructions to be executed
-via the Linux shell in order to install and configure the required
-dependencies.
-
-1. Install Dependencies.
-
-To support the framework dependencies the following packages must be
-installed. The example provided is based on Ubuntu and needs to be executed
-in root mode.
-
-::
-
-  apt-get install python-dev
-  apt-get install python-pip
-  apt-get install python-mock
-  apt-get install tcpreplay
-  apt-get install libpcap-dev
-
-2. Source the OpenStack openrc file.
-
-::
-
-  source openrc
-
-3. Configure OpenStack Neutron.
-
-In order to support traffic generation and management by the virtual
-Traffic Classifier, the configuration of the port security driver extension
-is required for Neutron.
-
-For further details please follow this link: PORTSEC_
-This step can be skipped if the target OpenStack is the Juno or Kilo
-release, but it is required to support Liberty.
-It is therefore required to indicate the release version in the
-configuration file located in ./yardstick/vTC/apexlake/apexlake.conf
-
-
-4. Create Two Networks based on VLANs in Neutron.
-
-To enable network communications between the packet generator and the
-compute node, two networks must be created via Neutron and mapped to the
-VLAN IDs that were previously used in the configuration of the physical
-switch. The following shows the typical set of commands required to
-configure Neutron correctly.
-The physical switches need to be configured accordingly.
-
-::
-
-  VLAN_1=2032
-  VLAN_2=2033
-  PHYSNET=physnet2
-  neutron net-create apexlake_inbound_network \
-        --provider:network_type vlan \
-        --provider:segmentation_id $VLAN_1 \
-        --provider:physical_network $PHYSNET
-
-  neutron subnet-create apexlake_inbound_network \
-        192.168.0.0/24 --name apexlake_inbound_subnet
-
-  neutron net-create apexlake_outbound_network \
-        --provider:network_type vlan \
-        --provider:segmentation_id $VLAN_2 \
-        --provider:physical_network $PHYSNET
-
-  neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \
-        --name apexlake_outbound_subnet
-
-
-5. Download the Ubuntu Cloud Image and load it on Glance.
-
-The virtual Traffic Classifier is supported on top of the Ubuntu 14.04 cloud
-image. The image can be downloaded on the local machine and loaded on Glance
-using the following commands:
-
-::
-
-  wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
-  glance image-create \
-        --name ubuntu1404 \
-        --is-public true \
-        --disk-format qcow2 \
-        --container-format bare \
-        --file trusty-server-cloudimg-amd64-disk1.img
-
-
-
-6. Configure the Test Cases.
-
-The VLAN tags must also be included in the test case Yardstick yaml file
-as parameters for the following test cases:
-
-  * :doc:`opnfv_yardstick_tc006`
-
-  * :doc:`opnfv_yardstick_tc007`
-
-  * :doc:`opnfv_yardstick_tc020`
-
-  * :doc:`opnfv_yardstick_tc021`
-
-
-Install and Configure DPDK Pktgen
-+++++++++++++++++++++++++++++++++
-
-Execution of the framework is based on DPDK Pktgen.
-If DPDK Pktgen has not been installed, it is necessary to download, install,
-compile and configure it.
-The user can create a directory and download the dpdk packet generator
-source code:
-
-::
-
-  cd experimental_framework/libraries
-  mkdir dpdk_pktgen
-  git clone https://github.com/pktgen/Pktgen-DPDK.git
-
-For instructions on the installation and configuration of DPDK and DPDK
-Pktgen please follow the official DPDK Pktgen README file.
-Once the installation is completed, it is necessary to load the DPDK kernel
-driver, as follows:
-
-::
-
-  insmod uio
-  insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-
-It is necessary to set the configuration file to support the desired Pktgen
-configuration.
-A description of the required configuration parameters, with supporting
-examples, is provided in the following:
-
-::
-
-  [PacketGen]
-  packet_generator = dpdk_pktgen
-
-  # This is the directory where the packet generator is installed
-  # (if the user previously installed dpdk-pktgen,
-  # it is required to provide the directory where it is installed).
-  pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/
-
-  # This is the directory where DPDK is installed
-  dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/
-
-  # Name of the dpdk-pktgen program that starts the packet generator
-  program_name = app/app/x86_64-native-linuxapp-gcc/pktgen
-
-  # DPDK coremask (see DPDK-Pktgen readme)
-  coremask = 1f
-
-  # DPDK memory channels (see DPDK-Pktgen readme)
-  memory_channels = 3
-
-  # Name of the interface of the pktgen to be used to send traffic (vlan_sender)
-  name_if_1 = p1p1
-
-  # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver)
-  name_if_2 = p1p2
-
-  # PCI bus address corresponding to if_1
-  bus_slot_nic_1 = 01:00.0
-
-  # PCI bus address corresponding to if_2
-  bus_slot_nic_2 = 01:00.1
-
-
-To find the parameters related to the names of the NICs and the addresses of
-the PCI buses, the user may find it useful to run the :term:`DPDK` tool
-nic_bind as follows:
-
-::
-
-  DPDK_DIR/tools/dpdk_nic_bind.py --status
-
-This lists the NICs available on the system, and shows the available drivers
-and bus addresses for each interface.
-Please make sure to select NICs which are :term:`DPDK` compatible.
-
-Installation and Configuration of smcroute
-++++++++++++++++++++++++++++++++++++++++++
-
-The user is required to install smcroute, which is used by the framework to
-support multicast communications.
-
-The following is the list of commands required to download and install
-smcroute.
-
-::
-
-  cd ~
-  git clone https://github.com/troglobit/smcroute.git
-  cd smcroute
-  git reset --hard c3f5c56
-  sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh
-  sed -i 's/automake-1.11/automake/g' ./autogen.sh
-  ./autogen.sh
-  ./configure
-  make
-  sudo make install
-  cd ..
-
-It is required to do the reset to the specified commit ID.
-It is also required to create a configuration variable using the following
-command:
-
-::
-
-  SMCROUTE_NIC=(name of the nic)
-
-where "name of the nic" is the name used previously for the variable
-"name_if_2". For example:
-
-::
-
-  SMCROUTE_NIC=p1p2
-
-Then create the smcroute configuration file /etc/smcroute.conf:
-
-::
-
-  echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf
-
-
-At the end of this procedure it will be necessary to perform the following
-actions to add the user to the sudoers:
-
-::
-
-  adduser USERNAME sudo
-  echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
-
-
-Experiment using SR-IOV Configuration on the Compute Node
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node,
-a compatible NIC is required.
-NIC configuration depends on model and vendor. After proper configuration to
-support :term:`SR-IOV`, a proper configuration of OpenStack is required.
-For further information, please refer to the SRIOV_ configuration guide.
-
-Finalize the installation of the framework on the system
-=========================================================
-
-The installation of the framework on the system requires the setup of the
-project. After entering the apexlake directory, it is sufficient to run the
-following command.
-
-::
-
-  python setup.py install
-
-Since some elements are copied into the /tmp directory (see the
-configuration file), it could be necessary to repeat this step after a
-reboot of the host.
diff --git a/docs/userguide/06-apexlake_api.rst b/docs/userguide/06-apexlake_api.rst
deleted file mode 100644
index 35a1dbe3e..000000000
--- a/docs/userguide/06-apexlake_api.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation and others.
-
-
-=================================
-Apexlake API Interface Definition
-=================================
-
-Abstract
---------
-
-The API interface provided by the framework to enable the execution of test
-cases is defined as follows.
-
-
-init
-----
-
-**static init()**
-
-    Initializes the framework.
-
-    **Returns** None
-
-
-execute_framework
------------------
-
-**static execute_framework** (test_cases,
-
-                              iterations,
-
-                              heat_template,
-
-                              heat_template_parameters,
-
-                              deployment_configuration,
-
-                              openstack_credentials)
-
-    Executes the framework according to the specified inputs.
-
-    **Parameters**
-
-        - **test_cases**
-
-          Test cases to be run with the workload (dict() of dict())
-
-          Example:
-            test_case = dict()
-
-            test_case['name'] = 'module.Class'
-
-            test_case['params'] = dict()
-
-            test_case['params']['throughput'] = '1'
-
-            test_case['params']['vlan_sender'] = '1000'
-
-            test_case['params']['vlan_receiver'] = '1001'
-
-            test_cases = [test_case]
-
-        - **iterations**
-          Number of test cycles to be executed (int)
-
-        - **heat_template**
-          (string) File name of the heat template corresponding to the
-          workload to be deployed. It contains the parameters to be
-          evaluated in the form of #parameter_name.
-          (See heat_templates/vTC.yaml as an example.)
-
-        - **heat_template_parameters**
-          (dict) Parameters to be provided as input to the heat template.
-          See http://docs.openstack.org/developer/heat/template_guide/hot_guide.html
-          section "Template input parameters" for further info.
-
-        - **deployment_configuration**
-          ( dict[string] = list(strings) ) Dictionary of parameters
-          representing the deployment configuration of the workload.
-
-          The key is a string corresponding to the name of the parameter,
-          the value is a list of strings representing the values to be
-          assumed by a specific param. The parameters are user defined:
-          they have to correspond to the placeholders (#parameter_name)
-          specified in the heat template.
-
-        **Returns** dict() containing results
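-
-Putting the pieces above together, a hedged usage sketch follows. The import
-path and entry-point module are assumptions based on the directory names in
-this guide, and all argument values are placeholders; only the call
-signatures come from the definition above. Consult the ApexLake sources for
-the real entry point:
-
-.. code-block:: python
-
-    # Hedged usage sketch: import path and values are assumptions, the
-    # signatures of init() and execute_framework() follow the text above.
-    from experimental_framework import api  # hypothetical module path
-
-    test_case = dict()
-    test_case['name'] = 'module.Class'
-    test_case['params'] = dict()
-    test_case['params']['throughput'] = '1'
-    test_case['params']['vlan_sender'] = '1000'
-    test_case['params']['vlan_receiver'] = '1001'
-
-    api.init()
-    results = api.execute_framework(
-        test_cases=[test_case],
-        iterations=1,
-        heat_template='vTC.yaml',
-        heat_template_parameters={'param_name': 'value'},   # placeholders
-        deployment_configuration={'param_name': ['value']}, # placeholders
-        openstack_credentials={'user': '...', 'password': '...'})
-    print(results)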
diff --git a/docs/userguide/07-nsb-overview.rst b/docs/userguide/07-nsb-overview.rst
deleted file mode 100644
index 19719f1a7..000000000
--- a/docs/userguide/07-nsb-overview.rst
+++ /dev/null
@@ -1,177 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
-
-===================================
-Network Services Benchmarking (NSB)
-===================================
-
-Abstract
-========
-
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-
-This chapter provides an overview of NSB, a contribution to OPNFV Yardstick_
-from Intel.
-
-Overview
-========
-
-GOAL: Extend Yardstick to perform real-world VNF and NFVI characterization
-and benchmarking with repeatable and deterministic methods.
-
-Network Services Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments: bare metal, i.e. the native Linux environment; a standalone
-virtual environment; and a managed virtualized environment (e.g. OpenStack).
-It also brings in the capability to interact with external traffic
-generators, both hardware and software based, for triggering and validating
-traffic according to user-defined profiles.
-
-The NSB extension includes:
-
-* Generic data models of Network Services, based on ETSI specs
-
-* A new Standalone context for VNF testing, e.g. SRIOV, OVS, OVS-DPDK
-
-* Generic VNF configuration models and metrics implemented with Python
-  classes
-
-* Traffic generator features and traffic profiles
-
-  * L1-L3 stateless traffic profiles
-
-  * L4-L7 stateful traffic profiles
-
-  * Tunneling protocol / network overlay support
-
-* Test case samples
-
-  * Ping
-
-  * Trex
-
-  * vPE, vCGNAT, vFirewall etc. - ipv4 throughput, latency etc.
-
-* Traffic generators such as Trex, ab/nginx, ixia, iperf etc.
-
-* KPIs for a given use case:
-
-  * System agent support for collecting NFVI KPIs, including:
-
-    * CPU statistics
-
-    * Memory bandwidth
-
-    * OVS-DPDK stats
-
-  * Network KPIs - e.g. inpackets, outpackets, throughput, latency etc.
-
-  * VNF KPIs - e.g. packet_in, packet_drop, packet_fwd etc.
-
-Architecture
-============
-
-The Network Service (NS) defines a set of Virtual Network Functions (VNF)
-connected together using NFV infrastructure.
-
-The Yardstick NSB extension can support multiple VNFs created by different
-vendors, including traffic generators. Every VNF being tested has its own
-data model. The Network Service defines a VNF modelling based on the network
-functionality it performs. Part of the data model is a set of configuration
-parameters, the number of connection points used, and the flavor, including
-core and memory amount.
-
-ETSI defines a Network Service as a set of configurable VNFs working in
-some NFV Infrastructure, connected to each other using Virtual Links
-available through Connection Points. The ETSI MANO specification defines a
-set of management entities called Network Service Descriptors (NSD) and VNF
-Descriptors (VNFD) that define a real Network Service. The picture below
-gives an example of how a real Network Operator use-case can map onto the
-ETSI Network Service definition.
-
-The Network Service framework performs the necessary test steps. These may
-involve:
-
-* Interacting with the traffic generator and providing the inputs on traffic
-  type / packet structure to generate the required traffic as per the test
-  case. Traffic profiles are used for this (a sketch follows this list).
-
-* Executing the commands required for the test procedure and analysing the
-  command output to confirm whether the command was executed correctly,
-  e.g., as per the test case, running the traffic for the given time period
-  or waiting for the necessary time delay.
-
-* Verifying the test result.
-
-* Validating the traffic flow from the SUT.
-
-* Fetching tables / data from the SUT and verifying the values as per the
-  test case.
-
-* Uploading the logs from the SUT onto the Test Harness server.
-
-* Reading the KPIs provided by the particular VNF.
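-
-As a rough illustration of what a "user-defined profile" carries, a traffic
-profile can be thought of as a small declarative structure consumed by the
-traffic generator model. The field names below are hypothetical, chosen only
-to mirror the concepts in this chapter (IMIX mix, flow count, allowed loss
-rate); they are not NSB's actual schema:
-
-.. code-block:: python
-
-    # Hypothetical, simplified traffic-profile structure (not NSB's schema).
-    profile = {
-        "traffic_type": "RFC2544Throughput",
-        "frame_sizes": [64, 570, 1518],   # a simple IMIX-style mix
-        "flow_count": 16384,              # e.g. one of 1, 1K, 16K, 64K ...
-        "allowed_loss_rate": 0.01,        # 1% loss is the stated default
-        "duration": 30,                   # seconds per trial
-    }
-
-    def describe(p):
-        return "%s: %d flows, %.1f%% allowed loss" % (
-            p["traffic_type"], p["flow_count"],
-            p["allowed_loss_rate"] * 100)
-
-    print(describe(profile))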
-
-Components of Network Service
------------------------------
-
-* *Models for Network Service benchmarking*: Network Service benchmarking
-  requires a proper modelling approach. NSB provides models using Python
-  files defining the NSDs and VNFDs.
-
-The benchmark control application, being a part of OPNFV yardstick, can call
-those python models to instantiate and configure the VNFs. Depending on the
-infrastructure type (bare-metal or fully virtualized), those calls can be
-made directly or via a MANO system.
-
-* *Traffic generators in NSB*: Any benchmark application requires a set of
-  traffic generators and traffic profiles defining the method in which
-  traffic is generated.
-
-The Network Service benchmarking model extends the Network Service
-definition with a set of Traffic Generators (TG) that are treated the same
-way as the other VNFs that are part of the benchmarked network service.
-Like the other VNFs, the traffic generators are instantiated and terminated.
-
-Every traffic generator has its own configuration, defined as a traffic
-profile, and a set of supported KPIs. The python models for TGs are extended
-by specific calls to listen for and generate traffic.
-
-* *The stateless TREX traffic generator*: The main traffic generator used as
-  the Network Service stimulus is the open source TREX tool.
-
-The TREX tool can generate any kind of stateless traffic.
-
-.. code-block:: console
-
-  +--------+      +-------+      +--------+
-  |        |      |       |      |        |
-  |  Trex  | ---> |  VNF  | ---> |  Trex  |
-  |        |      |       |      |        |
-  +--------+      +-------+      +--------+
-
-Supported test case scenarios:
-
-* Correlated UDP traffic using the TREX traffic generator and a replay VNF:
-
-  * using different IMIX configurations, like pure voice, pure video
-    traffic etc.
-
-  * using different numbers of IP flows, like 1 flow, 1K, 16K, 64K, 256K,
-    1M flows
-
-  * using different numbers of configured rules, like 1 rule, 1K, 10K rules
-
-For UDP correlated traffic, the following Key Performance Indicators are
-collected for every combination of test case parameters:
-
-* RFC2544 throughput for the various defined loss rates (1% is the default)
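-
-The RFC2544 throughput KPI just mentioned is, in essence, a search for the
-highest offered load whose measured loss stays within the allowed rate. A
-self-contained sketch of that search logic, with ``measure_loss`` as a
-stand-in for a real traffic-generator trial (this is illustrative, not NSB
-code):
-
-.. code-block:: python
-
-    def rfc2544_throughput(measure_loss, allowed_loss=0.01,
-                           lo=0.0, hi=100.0, steps=10):
-        """Binary search for the highest load (in % of line rate) whose
-        measured loss ratio stays within allowed_loss."""
-        best = 0.0
-        for _ in range(steps):
-            rate = (lo + hi) / 2.0
-            if measure_loss(rate) <= allowed_loss:
-                best, lo = rate, rate      # passed: search higher
-            else:
-                hi = rate                  # failed: search lower
-        return best
-
-    # Stand-in for a real traffic-generator trial: pretend the device
-    # starts dropping packets above 42% of line rate.
-    fake_loss = lambda rate: 0.0 if rate <= 42.0 else 0.5
-
-    print("RFC2544 throughput: %.2f%% of line rate"
-          % rfc2544_throughput(fake_loss))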
-
-Graphical Overview
-==================
-
-NSB testing with the yardstick framework facilitates performance testing of
-the various VNFs provided.
-
-.. code-block:: console
-
-  +-----------+
-  |           |                                  +-----------+
-  |   vPE     |                               -> |TGen Port 0|
-  | TestCase  |                               |  +-----------+
-  |           |                               |
-  +-----------+   +------------------+    +-------+
-                  |                  | -- API --> |  VNF  | <--->
-  +-----------+   |    Yardstick     |    +-------+   |
-  | Test Case | ->|   NSB Testing    |                |
-  +-----------+   |                  |                |
-        |         |                  |                |
-        |         +------------------+                |
-  +-----------+                                   +-----------+
-  |  Traffic  |                                -> |TGen Port 1|
-  |  patterns |                                   +-----------+
-  +-----------+
-
-  Figure 1: Network Service - 2 server configuration
-
-
-Install
-=======
-
-Run the nsb_install.sh script with root privileges.
-
-Run
-===
-
-::
-
-  source ~/.bash_profile
-  cd /yardstick/cmd
-  sudo -E ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
-
-Development Environment
-=======================
-
-Ubuntu 14.04, Ubuntu 16.04
diff --git a/docs/userguide/08-nsb_installation.rst b/docs/userguide/08-nsb_installation.rst
deleted file mode 100644
index a390bb7d7..000000000
--- a/docs/userguide/08-nsb_installation.rst
+++ /dev/null
@@ -1,253 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
-
-======================================
-Yardstick - NSB Testing - Installation
-======================================
-
-Abstract
---------
-
-Yardstick supports installation on Ubuntu 14.04 or via a Docker image. The
-installation procedures for Ubuntu 14.04 and for the Docker image are
-detailed in the sections below.
-
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments: bare metal, i.e. the native Linux environment; a standalone
-virtual environment; and a managed virtualized environment (e.g. OpenStack).
-It also brings in the capability to interact with external traffic
-generators, both hardware and software based, for triggering and validating
-traffic according to user-defined profiles.
-
-The steps needed to run Yardstick with NSB testing are:
-
-* Install Yardstick (NSB Testing).
-* Set up pod.yaml describing the test topology.
-* Create the test configuration yaml file.
-* Run the test case.
-
-
-Prerequisites
--------------
-
-Refer to the chapter 09-installation.rst for more information on yardstick
-prerequisites.
-
-Several prerequisites are needed for Yardstick (VNF testing):
-
-* Python modules: pyzmq, pika
-* flex
-* bison
-* build-essential
-* automake
-* libtool
-* librabbitmq-dev
-* rabbitmq-server
-* collectd
-* intel-cmt-cat
-
-Installing Yardstick on Ubuntu 14.04
-------------------------------------
-
-.. _install-framework:
-
-You can install the Yardstick framework directly on Ubuntu 14.04 or in an
-Ubuntu 14.04 Docker image. No matter which way you choose to install
-Yardstick, the following installation steps are identical.
-
-If you choose to use the Ubuntu 14.04 Docker image, you can pull the Ubuntu
-14.04 Docker image from Docker hub:
-
-::
-
-  docker pull ubuntu:14.04
-
-Installing the Yardstick framework
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Download the source code and install the python dependencies:
-
-::
-
-  git clone https://gerrit.opnfv.org/gerrit/yardstick
-  cd yardstick
-  ./nsb_setup.sh
-
-This will automatically download all the packages needed for the NSB testing
-setup.
-
-System Topology
----------------
-
-.. code-block:: console
-
-  +----------+              +----------+
-  |          |              |          |
-  |          | (0)----->(0) |  Ping/   |
-  |   TG1    |              |  vPE/    |
-  |          |              |  2Trex   |
-  |          | (1)<-----(1) |          |
-  +----------+              +----------+
-  trafficgen_1                   vnf
-
-
-OpenStack parameters and credentials
-------------------------------------
-
-Environment variables
-^^^^^^^^^^^^^^^^^^^^^
-
-Before running Yardstick (NSB testing) it is necessary to export the traffic
-generator libraries:
-
-::
-
-  source ~/.bash_profile
-
-Configure the yardstick conf
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
-  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
-  vi /etc/yardstick/yardstick.conf
-
-Configure yardstick.conf:
-
-::
-
-  [DEFAULT]
-  debug = True
-  dispatcher = influxdb
-
-  [dispatcher_influxdb]
-  timeout = 5
-  target = http://{YOUR_IP_HERE}:8086
-  db_name = yardstick
-  username = root
-  password = root
-
-  [nsb]
-  trex_path=/opt/nsb_bin/trex/scripts
-  bin_path=/opt/nsb_bin
-
-
-Configure pod.yaml describing the topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology, and update all the required fields.
-
-Copy /etc/yardstick/nodes/pod.yaml.nsb.example to
-/etc/yardstick/nodes/pod.yaml.
-
-Config pod.yaml:
-::
-
-  nodes:
-  -
-      name: trafficgen_1
-      role: TrafficGen
-      ip: 1.1.1.1
-      user: root
-      password: r00t
-      interfaces:
-          xe0:  # logical name from topology.yaml and vnfd.yaml
-              vpci: "0000:07:00.0"
-              driver: i40e # default kernel driver
-              dpdk_port_num: 0
-              local_ip: "152.16.100.20"
-              netmask: "255.255.255.0"
-              local_mac: "00:00:00:00:00:01"
-          xe1:  # logical name from topology.yaml and vnfd.yaml
-              vpci: "0000:07:00.1"
-              driver: i40e # default kernel driver
-              dpdk_port_num: 1
-              local_ip: "152.16.40.20"
-              netmask: "255.255.255.0"
-              local_mac: "00:00:00:00:00:02"
-
-  -
-      name: vnf
-      role: vnf
-      ip: 1.1.1.2
-      user: root
-      password: r00t
-      host: 1.1.1.2 # BM - host == ip, virtualized env - host = compute node
-      interfaces:
-          xe0:  # logical name from topology.yaml and vnfd.yaml
-              vpci: "0000:07:00.0"
-              driver: i40e # default kernel driver
-              dpdk_port_num: 0
-              local_ip: "152.16.100.19"
-              netmask: "255.255.255.0"
-              local_mac: "00:00:00:00:00:03"
-
-          xe1:  # logical name from topology.yaml and vnfd.yaml
-              vpci: "0000:07:00.1"
-              driver: i40e # default kernel driver
-              dpdk_port_num: 1
-              local_ip: "152.16.40.19"
-              netmask: "255.255.255.0"
-              local_mac: "00:00:00:00:00:04"
-      routing_table:
-      - network: "152.16.100.20"
-        netmask: "255.255.255.0"
-        gateway: "152.16.100.20"
-        if: "xe0"
-      - network: "152.16.40.20"
-        netmask: "255.255.255.0"
-        gateway: "152.16.40.20"
-        if: "xe1"
-      nd_route_tbl:
-      - network: "0064:ff9b:0:0:0:0:9810:6414"
-        netmask: "112"
-        gateway: "0064:ff9b:0:0:0:0:9810:6414"
-        if: "xe0"
-      - network: "0064:ff9b:0:0:0:0:9810:2814"
-        netmask: "112"
-        gateway: "0064:ff9b:0:0:0:0:9810:2814"
-        if: "xe1"
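-
-Since a stale or missing interface field is a common cause of failed NSB
-runs, a small checker like the following can catch omissions early. This is
-illustrative only, not part of Yardstick, and assumes the PyYAML package is
-available:
-
-.. code-block:: python
-
-    # Illustrative pod.yaml sanity check (not part of Yardstick).
-    import yaml
-
-    REQUIRED = ("vpci", "driver", "dpdk_port_num",
-                "local_ip", "netmask", "local_mac")
-
-    with open("/etc/yardstick/nodes/pod.yaml") as f:
-        pod = yaml.safe_load(f)
-
-    for node in pod["nodes"]:
-        for name, iface in (node.get("interfaces") or {}).items():
-            missing = [k for k in REQUIRED if k not in iface]
-            if missing:
-                print("%s/%s is missing: %s"
-                      % (node["name"], name, ", ".join(missing)))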
-
-Enable the yardstick virtual environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Before executing yardstick test cases, make sure to activate the yardstick
-python virtual environment:
-
-::
-
-  source /opt/nsb_bin/yardstick_venv/bin/activate
-
-
-Examples and verifying the install
-----------------------------------
-
-It is recommended to verify that Yardstick was installed successfully
-by executing some simple commands and test samples. Before executing
-yardstick test cases, make sure the yardstick flavor and the
-yardstick-trusty-server image can be found in glance and that the openrc
-file is sourced. Below is an example invocation of the yardstick help
-command and the ping.py test sample:
-
-::
-
-  yardstick -h
-  yardstick task start samples/ping.yaml
-
-Each testing tool supported by Yardstick has a sample configuration file.
-These configuration files can be found in the **samples** directory.
-
-The default location for the output is ``/tmp/yardstick.out``.
-
-
-Run Yardstick - Network Service Testcases
------------------------------------------
-
-NS testing - using the NSBperf CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
-  source /opt/nsb_setup/yardstick_venv/bin/activate
-  PYTHONPATH: ". ~/.bash_profile"
-  cd /yardstick/cmd
-  Execute command: ./NSBperf.py -h
-  ./NSBperf.py --vnf --test
-  eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
-
-NS testing - using the yardstick CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
-  source /opt/nsb_setup/yardstick_venv/bin/activate
-  PYTHONPATH: ". ~/.bash_profile"
-
-Go to the folder of the test case type we want to execute, e.g.
-/samples/vnf_samples/nsut//, and run:
-
-::
-
-  yardstick --debug task start
diff --git a/docs/userguide/09-installation.rst b/docs/userguide/09-installation.rst
deleted file mode 100644
index 9c2082a27..000000000
--- a/docs/userguide/09-installation.rst
+++ /dev/null
@@ -1,401 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
-
-Yardstick Installation
-======================
-
-Abstract
---------
-
-Yardstick supports installation on Ubuntu 14.04 or via a Docker image. The
-installation procedures for Ubuntu 14.04 and for the Docker image are
-detailed in the sections below.
-
-To use Yardstick you should have access to an OpenStack environment, with at
-least Nova, Neutron, Glance, Keystone and Heat installed.
-
-The steps needed to run Yardstick are:
-
-1. Install Yardstick.
-2. Load OpenStack environment variables.
-3. Create a Neutron external network.
-4. Build the Yardstick flavor and a guest image.
-5. Load the guest image into the OpenStack environment.
-6. Create the test configuration .yaml file.
-7. Run the test case.
-
-
-Prerequisites
--------------
-
-The OPNFV deployment is out of the scope of this document, but it can be
-found at
-http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html.
-The OPNFV platform is considered the System Under Test (SUT) in this
-document.
-
-Several prerequisites are needed for Yardstick:
-
-  #. A Jumphost to run Yardstick on
-  #. A Docker daemon installed on the Jumphost
-  #. A public/external network created on the SUT
-  #. Connectivity from the Jumphost to the SUT public/external network
-
-WARNING: Connectivity from the Jumphost is essential, and it is of paramount
-importance to make sure it is working before even considering installing
-and running Yardstick. Also make sure you understand how your networking is
-designed to work.
-
-NOTE: **Jumphost** refers to any server which meets the previous
-requirements. Normally it is the same server from which the OPNFV
-deployment has been triggered previously.
-
-NOTE: If your Jumphost is operating behind a company http proxy and/or
-firewall, please first consult the section `Proxy Support`_, towards the
-end of this document. The section details some tips/tricks which *may* be
-of help in a proxified environment.
-
-
-Installing Yardstick on Ubuntu 14.04
-------------------------------------
-
-.. _install-framework:
-
-You can install the Yardstick framework directly on Ubuntu 14.04 or in an
-Ubuntu 14.04 Docker image. No matter which way you choose to install
-Yardstick, the following installation steps are identical.
-
-If you choose to use the Ubuntu 14.04 Docker image, you can pull the Ubuntu
-14.04 Docker image from Docker hub:
-
-::
-
-  docker pull ubuntu:14.04
-
-Installing the Yardstick framework
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Download the source code and install the python dependencies:
-
-::
-
-  git clone https://gerrit.opnfv.org/gerrit/yardstick
-  cd yardstick
-  ./install.sh
-
-
-Installing Yardstick using Docker
----------------------------------
-
-Yardstick has a Docker image. This Docker image (**Yardstick-stable**)
-serves as a replacement for installing the Yardstick framework in a virtual
-environment (for example as done in :ref:`install-framework`).
-It is recommended to use this Docker image to run Yardstick tests.
-
-Pulling the Yardstick Docker image
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. _dockerhub: https://hub.docker.com/r/opnfv/yardstick/
-
-Pull the Yardstick Docker image ('opnfv/yardstick') from the public
-dockerhub registry under the OPNFV account (dockerhub_), with the following
-docker command::
-
-  docker pull opnfv/yardstick:stable
-
-After pulling the Docker image, check that it is available with the
-following docker command::
-
-  [yardsticker@jumphost ~]$ docker images
-  REPOSITORY         TAG       IMAGE ID        CREATED      SIZE
-  opnfv/yardstick    stable    a4501714757a    1 day ago    915.4 MB
-
-Run the Docker image:
-
-::
-
-  docker run --privileged=true -it opnfv/yardstick:stable /bin/bash
-
-In the container, the Yardstick repository is located in the
-/home/opnfv/repos directory.
-
-
-OpenStack parameters and credentials
-------------------------------------
-
-Environment variables
-^^^^^^^^^^^^^^^^^^^^^
-
-Before running Yardstick it is necessary to export the OpenStack environment
-variables from the OpenStack *openrc* file (using the ``source`` command)
-and export the external network name
-``export EXTERNAL_NETWORK="external-network-name"``;
-the default name for the external network is ``net04_ext``.
-
-Credential environment variables in the *openrc* file have to include at
-least:
-
-* OS_AUTH_URL
-* OS_USERNAME
-* OS_PASSWORD
-* OS_TENANT_NAME
-
-A sample openrc file may look like this:
-
-::
-
-  export OS_PASSWORD=console
-  export OS_TENANT_NAME=admin
-  export OS_AUTH_URL=http://172.16.1.222:35357/v2.0
-  export OS_USERNAME=admin
-  export OS_VOLUME_API_VERSION=2
-  export EXTERNAL_NETWORK=net04_ext
-
-
-Yardstick flavor and guest images
----------------------------------
-
-Before executing Yardstick test cases, make sure that the yardstick guest
-image and the yardstick flavor are available in OpenStack.
-Detailed steps for creating the yardstick flavor and building the
-yardstick-trusty-server image can be found below.
-
-Yardstick-flavor
-^^^^^^^^^^^^^^^^
-
-Most of the sample test cases in Yardstick use an OpenStack flavor called
-*yardstick-flavor* which deviates from the OpenStack standard m1.tiny flavor
-in the disk size - instead of 1GB it has 3GB. The other parameters are the
-same as in m1.tiny.
-
-Create yardstick-flavor:
-
-::
-
-  nova flavor-create yardstick-flavor 100 512 3 1
-
-
-.. _guest-image:
-
-Building a guest image
-^^^^^^^^^^^^^^^^^^^^^^
-
-Most of the sample test cases in Yardstick use a guest image called
-*yardstick-trusty-server* which deviates from an Ubuntu Cloud Server image
-in that it contains all the required tools to run the test cases supported
-by Yardstick. Yardstick has a tool for building this custom image. It is
-necessary to have sudo rights to use this tool.
-
-You may also need to install several additional packages to use this tool,
-by following the commands below:
-
-::
-
-  apt-get update && apt-get install -y \
-      qemu-utils \
-      kpartx
-
-This image can be built using the following command while in the directory
-where Yardstick is installed (``~/yardstick`` if the framework is installed
-by following the commands above):
-
-::
-
-  export YARD_IMG_ARCH="amd64"
-  sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers
-  sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
-
-**Warning:** the script will create files by default in
-``/tmp/workspace/yardstick`` and the files will be owned by root!
-
-If you are building this guest image inside a docker container, make sure
-the container is granted privilege.
-
-The created image can be added to OpenStack using the ``glance
-image-create`` command or via the OpenStack Dashboard.
-
-Example command:
-
-::
-
-  glance --os-image-api-version 1 image-create \
-      --name yardstick-image --is-public true \
-      --disk-format qcow2 --container-format bare \
-      --file /tmp/workspace/yardstick/yardstick-image.img
-
-Some Yardstick test cases use a Cirros image; you can find one at
-http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
-
-
-Automatic flavor and image creation
------------------------------------
-
-Yardstick has a script for automatically creating the yardstick flavor and
-building the guest images. This script is mainly used in CI, but you can
-still use it in your local environment.
-
-Example command:
-
-::
-
-  export YARD_IMG_ARCH="amd64"
-  sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers
-  source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh
-
-
-Yardstick default key pair
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Yardstick uses an SSH key pair to connect to the guest image. This key pair
-can be found in the ``resources/files`` directory. To run the
-``ping-hot.yaml`` test sample, this key pair needs to be imported to the
-OpenStack environment.
-
-
-Examples and verifying the install
-----------------------------------
-
-It is recommended to verify that Yardstick was installed successfully
-by executing some simple commands and test samples. Before executing
-yardstick test cases, make sure the yardstick flavor and the
-yardstick-trusty-server image can be found in glance and that the openrc
-file is sourced. Below is an example invocation of the yardstick help
-command and the ping.py test sample:
-
-::
-
-  yardstick -h
-  yardstick task start samples/ping.yaml
-
-Each testing tool supported by Yardstick has a sample configuration file.
-These configuration files can be found in the **samples** directory.
-
-The default location for the output is ``/tmp/yardstick.out``.
-
-
-Deploy InfluxDB and Grafana locally
------------------------------------
-
-Pull docker images
-^^^^^^^^^^^^^^^^^^
-
-::
-
-  docker pull tutum/influxdb
-  docker pull grafana/grafana
-
-Run influxdb and config
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Run influxdb:
-
-::
-
-  docker run -d --name influxdb \
-      -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \
-      tutum/influxdb
-  docker exec -it influxdb bash
-
-Config influxdb:
-
-::
-
-  influx
-  >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
-  >CREATE DATABASE yardstick;
-  >use yardstick;
-  >show MEASUREMENTS;
-
-Run grafana and config
-^^^^^^^^^^^^^^^^^^^^^^
-
-Run grafana:
-
-::
-
-  docker run -d --name grafana -p 3000:3000 grafana/grafana
-
-Config grafana:
-
-::
-
-  http://{YOUR_IP_HERE}:3000
-  log on using admin/admin and config the database resource to be
-  {YOUR_IP_HERE}:8086
-
-.. image:: images/Grafana_config.png
-   :width: 800px
-   :alt: Grafana data source configuration
-
-Config the yardstick conf
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
-  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
-  vi /etc/yardstick/yardstick.conf
-
-Config yardstick.conf:
-
-::
-
-  [DEFAULT]
-  debug = True
-  dispatcher = influxdb
-
-  [dispatcher_influxdb]
-  timeout = 5
-  target = http://{YOUR_IP_HERE}:8086
-  db_name = yardstick
-  username = root
-  password = root
-
-Now you can run yardstick test cases and store the results in influxdb.
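-
-To sanity-check that results are actually landing in the database, you can
-query InfluxDB's HTTP API directly. A small hedged sketch follows; the host,
-port and the ``ping`` measurement name are assumptions for your setup, and
-the requests package must be installed:
-
-.. code-block:: python
-
-    # Hedged sketch: query the InfluxDB HTTP API for stored yardstick
-    # results. Host/port and the "ping" measurement are assumptions.
-    import requests
-
-    resp = requests.get("http://127.0.0.1:8086/query",
-                        params={"db": "yardstick",
-                                "q": "SELECT * FROM ping LIMIT 5"})
-    print(resp.json())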
-
-
-Create a test suite for yardstick
----------------------------------
-
-A test suite in yardstick is a yaml file which includes one or more test
-cases. Yardstick is able to support running test suite tasks, so you can
-customize your own test suite and run it in one task.
-
-"tests/opnfv/test_suites" is where yardstick puts the CI test suites. A
-typical test suite looks like below:
-
-fuel_test_suite.yaml
-
-::
-
-  ---
-  # Fuel integration test task suite
-
-  schema: "yardstick:suite:0.1"
-
-  name: "fuel_test_suite"
-  test_cases_dir: "samples/"
-  test_cases:
-  -
-    file_name: ping.yaml
-  -
-    file_name: iperf3.yaml
-
-As you can see, there are two test cases in fuel_test_suite. The syntax is
-simple: you must specify the schema and the name, then you just need to list
-the test cases in the tag "test_cases", and also mark their relative
-directory in the tag "test_cases_dir".
-
-Yardstick test suites also support constraints and task args for each test
-case. Here is another sample, digested from one big test suite, to show
-this:
-
-os-nosdn-nofeature-ha.yaml
-
-::
-
-  ---
-
-  schema: "yardstick:suite:0.1"
-
-  name: "os-nosdn-nofeature-ha"
-  test_cases_dir: "tests/opnfv/test_cases/"
-  test_cases:
-  -
-    file_name: opnfv_yardstick_tc002.yaml
-  -
-    file_name: opnfv_yardstick_tc005.yaml
-  -
-    file_name: opnfv_yardstick_tc043.yaml
-    constraint:
-      installer: compass
-      pod: huawei-pod1
-    task_args:
-      huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml",
-        "host": "node4.LF","target": "node5.LF"}'
-
-As you can see in the test case "opnfv_yardstick_tc043.yaml", there are two
-tags, "constraint" and "task_args". "constraint" is where you specify which
-installer or pod the test case can be run on in the CI environment.
-"task_args" is where you specify the task arguments for each pod.
-
-All in all, to create a test suite in yardstick, you just need to create a
-suite yaml file and add test cases, and constraints or task arguments if
-necessary.
-
diff --git a/docs/userguide/10-yardstick_plugin.rst b/docs/userguide/10-yardstick_plugin.rst
deleted file mode 100644
index f16dedd02..000000000
--- a/docs/userguide/10-yardstick_plugin.rst
+++ /dev/null
@@ -1,144 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
-
-===================================
-Installing a plug-in into yardstick
-===================================
-
-Abstract
-========
-
-Yardstick currently provides a ``plugin`` CLI command to support integration
-with other OPNFV testing projects. Below is an example invocation of the
-yardstick plugin command with a Storperf plug-in sample.
-
-
-Installing Storperf into yardstick
-==================================
-
-Storperf is delivered as a Docker container from
-https://hub.docker.com/r/opnfv/storperf/tags/.
-
-There are two possible methods for installation in your environment:
-
-* Run the container on the Jump Host
-* Run the container in a VM
-
-In this introduction we will install Storperf on the Jump Host.
-
-
-Step 0: Environment preparation
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
-
-Requirements for running Storperf on the Jump Host:
-
-* Docker must be installed
-* The Jump Host must have access to the OpenStack Controller API
-* The Jump Host must have internet connectivity for downloading the docker
-  image
-* Enough floating IPs must be available to match your agent count
-
-Before installing Storperf into yardstick you need to check your openstack
-environment and other dependencies:
-
-1. Make sure docker is installed.
-2. Make sure Keystone, Nova, Neutron, Glance and Heat are installed
-   correctly.
-3. Make sure the Jump Host has access to the OpenStack Controller API.
-4. Make sure the Jump Host has internet connectivity for downloading the
-   docker image.
-5. You need to know where to get the basic openstack Keystone authorization
-   info, such as OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, OS_USERNAME.
-6. To run a Storperf container, you need to have the OpenStack Controller
-   environment variables defined and passed to the Storperf container. The
-   best way to do this is to put the environment variables in a
-   "storperf_admin-rc" file. The storperf_admin-rc should include at least
-   the following credential environment variables:
-
-* OS_AUTH_URL
-* OS_TENANT_ID
-* OS_TENANT_NAME
-* OS_PROJECT_NAME
-* OS_USERNAME
-* OS_PASSWORD
-* OS_REGION_NAME
-
-During environment preparation, a "prepare_storperf_admin-rc.sh" script can
-be used to generate this storperf_admin-rc file:
-
-::
-
-  #!/bin/bash
-  AUTH_URL=${OS_AUTH_URL}
-  USERNAME=${OS_USERNAME:-admin}
-  PASSWORD=${OS_PASSWORD:-console}
-  TENANT_NAME=${OS_TENANT_NAME:-admin}
-  VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2}
-  PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME}
-  TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
-  rm -f ~/storperf_admin-rc
-  touch ~/storperf_admin-rc
-  echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc
-  echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc
-  echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc
-  echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc
-  echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc
-  echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc
-  echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc
-
-
-Step 1: Plug-in configuration file preparation
-++++++++++++++++++++++++++++++++++++++++++++++
-
-To install a plug-in, first you need to prepare a plug-in configuration file
-in YAML format and store it in the "plugin" directory. The plugin
-configuration file works as the input of the yardstick "plugin" command.
-Below is the Storperf plug-in configuration file sample:
-
-::
-
-  ---
-  # StorPerf plugin configuration file
-  # Used for integration StorPerf into Yardstick as a plugin
-  schema: "yardstick:plugin:0.1"
-  plugins:
-    name: storperf
-  deployment:
-    ip: 192.168.23.2
-    user: root
-    password: root
-
-In the plug-in configuration file, you need to specify the plug-in name and
-the plug-in deployment info, including the node ip and the node login
-username and password. Here Storperf will be installed on IP 192.168.23.2,
-which is the Jump Host in my local environment.
-
-Step 2: Plug-in install/remove scripts preparation
-++++++++++++++++++++++++++++++++++++++++++++++++++
-
-Under the "yardstick/resource/scripts" directory, there are two folders: an
-"install" folder and a "remove" folder. You need to store the plug-in
-install/remove scripts in these two folders respectively.
-
-The detailed install or remove operations should be defined in these two
-scripts. The names of both the install and remove scripts should match the
-plug-in name that you specified in the plug-in configuration file.
-For example, the install and remove scripts for Storperf are both named
-"storperf.bash".
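-
-Conceptually, the ``plugin`` command just logs in to the deployment target
-from the configuration file and runs the matching script. A rough, hedged
-sketch of that flow, using paramiko (this is not Yardstick's actual code,
-and paramiko is an assumed dependency here):
-
-.. code-block:: python
-
-    # Rough sketch of the plugin flow (not Yardstick's actual code):
-    # ssh to the node from the plugin config, then run the named script.
-    import paramiko
-
-    def run_plugin_script(ip, user, password, script="storperf.bash"):
-        client = paramiko.SSHClient()
-        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
-        client.connect(ip, username=user, password=password)
-        stdin, stdout, stderr = client.exec_command("bash %s" % script)
-        print(stdout.read().decode())
-        client.close()
-
-    run_plugin_script("192.168.23.2", "root", "root")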
-
-
-Step 3: Install and remove Storperf
-+++++++++++++++++++++++++++++++++++
-
-To install Storperf, simply execute the following command:
-
-::
-
-  # Install Storperf
-  yardstick plugin install plugin/storperf.yaml
-
-Removing Storperf from yardstick
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To remove Storperf, simply execute the following command:
-
-::
-
-  # Remove Storperf
-  yardstick plugin remove plugin/storperf.yaml
-
-What the yardstick plugin command does is to use the username and password
-to log into the deployment target and then execute the corresponding install
-or remove script.
diff --git a/docs/userguide/11-result-store-InfluxDB.rst b/docs/userguide/11-result-store-InfluxDB.rst
deleted file mode 100644
index a0bb48a80..000000000
--- a/docs/userguide/11-result-store-InfluxDB.rst
+++ /dev/null
@@ -1,86 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
-
-==============================================
-Store Other Project's Test Results in InfluxDB
-==============================================
-
-Abstract
-========
-
-.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2
-
-This chapter illustrates how to run plug-in test cases and store the test
-results into the community's InfluxDB. The framework is shown in Framework_.
-
-
-.. image:: images/InfluxDB_store.png
-   :width: 800px
-   :alt: Store Other Project's Test Results in InfluxDB
-
-Store Storperf Test Results into Community's InfluxDB
-=====================================================
-
-.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
-.. _Mingjiang: limingjiang@huawei.com
-.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2
-.. _Login: http://testresults.opnfv.org/grafana/login
-
-As shown in Framework_, there are two ways to store Storperf test results
-into the community's InfluxDB:
-
-1. Yardstick asks Storperf to run the test case. After the test case is
-   completed, Yardstick reads the test results via the ReST API from
-   Storperf and posts the test data to the influxDB.
-
-2. Additionally, Storperf can run tests by itself and post the test results
-   directly to the InfluxDB. The method for posting data directly to
-   influxDB will be supported in the future.
-
-Our plan is to support the rest-api in the D release so that other testing
-projects can call the rest-api to use the yardstick dispatcher service to
-push data to yardstick's influxdb database.
-
-For now, influxdb only supports the line protocol; the json protocol is
-deprecated.
-
-Taking the ping test case as an example, the raw_result is in json format,
-like this:
-
-::
-
-  "benchmark": {
-      "timestamp": 1470315409.868095,
-      "errors": "",
-      "data": {
-        "rtt": {
-        "ares": 1.125
-        }
-      },
-    "sequence": 1
-    },
-  "runner_id": 2625
-  }
-
-With the help of "influxdb_line_protocol", the json is transformed into a
-line string, like below:
-
-::
-
-  'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown,
-  runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3-
-  301c99963656,version=unknown rtt.ares=1.125 1470315409868094976'
-
-So, for data output in json format, you just need to transform the json into
-the line format and call the influxdb api to post the data into the
-database. All of this functionality has been implemented in Influxdb_.
-If you need support on this, please contact Mingjiang_.
-
-::
-
-  curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' \
-      --data-binary 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...'
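-
-The same write can be scripted. A hedged sketch that builds one
-line-protocol record from a result dict and posts it (the InfluxDB host, the
-tag set and the requests dependency are assumptions for your environment):
-
-.. code-block:: python
-
-    # Hedged sketch: convert one result record to line protocol and post it.
-    # The InfluxDB host and the tag values are assumptions for your setup.
-    import requests
-
-    record = {"host": "athena.demo", "target": "ares.demo",
-              "rtt": 1.125, "timestamp": 1470315409.868095}
-
-    line = "ping,host=%s,target=%s rtt.ares=%s %d" % (
-        record["host"], record["target"], record["rtt"],
-        int(record["timestamp"] * 1e9))   # influx expects nanoseconds
-
-    resp = requests.post("http://127.0.0.1:8086/write",
-                         params={"db": "yardstick"},
-                         data=line.encode())
-    print(resp.status_code)   # 204 means the point was accepted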
-
-Grafana will be used for visualizing the collected test data, as shown in
-Visual_. Grafana can be accessed via Login_.
-
-
-.. image:: images/results_visualization.png
-   :width: 800px
-   :alt: results visualization
-
diff --git a/docs/userguide/12-grafana.rst b/docs/userguide/12-grafana.rst
deleted file mode 100644
index 416857b71..000000000
--- a/docs/userguide/12-grafana.rst
+++ /dev/null
@@ -1,119 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2016 Huawei Technologies Co.,Ltd and others
-
-=================
-Grafana dashboard
-=================
-
-
-Abstract
-========
-
-This chapter describes the Yardstick grafana dashboard. The Yardstick
-grafana dashboard can be found here: http://testresults.opnfv.org/grafana/
-
-
-.. image:: images/login.png
-   :width: 800px
-   :alt: Yardstick grafana dashboard
-
-
-Public access
-=============
-
-Yardstick provides a public account for accessing the dashboard. The
-username and password are both set to 'opnfv'.
-
-
-Testcase dashboard
-==================
-
-For each test case there is a dedicated dashboard. Shown here is the
-dashboard of TC002.
-
-
-.. image:: images/TC002.png
-   :width: 800px
-   :alt: TC002 dashboard
-
-On the top left of each test case dashboard there is a dashboard selection;
-you can switch to a different test case using this pull-down menu.
-
-Underneath, there is a pod and scenario selection.
-All the pods and scenarios that have ever published test data to the
-InfluxDB are shown here.
-
-You can check multiple pods or scenarios.
-
-For each test case, there is a short description and a link to the detailed
-test case information in the Yardstick user guide.
-
-Underneath is the result presentation section.
-You can use the time period selection in the top right corner to zoom in or
-zoom out on the chart.
-
-
-Administration access
-=====================
-
-For a user with administration rights it is easy to update and save any
-dashboard configuration. Saved updates immediately take effect and become
-live. This may cause issues like:
-
-- Changes and updates made to the live configuration in Grafana can
-  compromise existing Grafana content in an unwanted, unpredicted or
-  incompatible way. Grafana as such is not version controlled; there exists
-  one single Grafana configuration per dashboard.
-- There is a risk that several people will disturb each other when doing
-  updates to the same Grafana dashboard at the same time.
-
-Any change made by an administrator should therefore be made with care.
-
-
-Add a dashboard into yardstick grafana
-======================================
-
-Due to security concerns, users using the public opnfv account are not able
-to edit the yardstick grafana directly. It takes a few more steps for a
-non-yardstick user to add a custom dashboard into yardstick grafana.
-
-There are 6 steps to go:
-
-
-.. image:: images/add.png
-   :width: 800px
-   :alt: Add a dashboard into yardstick grafana
-
-
-1. You need to build a local influxdb and grafana, so you can do the work
-   locally. You can refer to the "How to deploy InfluxDB and Grafana
-   locally" wiki page about how to do this.
-
-2. Once step one is done, you can fetch the existing grafana dashboard
-   configuration file from the yardstick repository and import it into your
-   local grafana. After the import is done, your grafana dashboard will be
-   ready to use, just like the community's dashboard.
-
-3. The third step is to run some test cases to generate test results and
-   publish them to your local influxdb.
-
-4. Now you have some data to visualize in your dashboard. In the fourth
-   step, it is time to create your own dashboard. You can either modify an
-   existing dashboard or try to create a new one from scratch. If you
-   choose to modify an existing dashboard, then in the curtain menu of the
-   existing dashboard do a "Save As..." into a new dashboard copy instance,
-   and then continue doing all updates and saves within the dashboard copy.
-
-5. When finished with all the Grafana configuration changes in this
-   temporary dashboard, choose "export" of the updated dashboard copy into
-   a JSON file and put it up for review in Gerrit, in the file
-   /yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy.
-   For instance, a typical default name of the file would be
-   "Yardstick-TC001 Copy-1234567891234".
-
-6. Once you finish your dashboard, the next step is to export the
-   configuration file and propose a patch into Yardstick. The Yardstick
-   team will review it and merge it into the Yardstick repository. After
-   the review is approved, the Yardstick team will do an "import" of the
-   JSON file and also a "save dashboard" as soon as possible, to replace
-   the old live dashboard configuration.
-
diff --git a/docs/userguide/13-list-of-tcs.rst b/docs/userguide/13-list-of-tcs.rst
deleted file mode 100644
index 1b5806cd9..000000000
--- a/docs/userguide/13-list-of-tcs.rst
+++ /dev/null
@@ -1,129 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-====================
-Yardstick Test Cases
-====================
-
-Abstract
-========
-
-This chapter lists the available Yardstick test cases.
-Yardstick test cases are divided into two main categories:
-
-* *Generic NFVI Test Cases* - Test cases developed to realize the
-  methodology described in :doc:`02-methodology`
-
-* *OPNFV Feature Test Cases* - Test cases developed to verify one or more
-  aspects of a feature delivered by an OPNFV project, including the test
-  cases developed for the :term:`VTC`.
-
-Generic NFVI Test Case Descriptions
-===================================
-
-.. toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc001.rst
-   opnfv_yardstick_tc002.rst
-   opnfv_yardstick_tc004.rst
-   opnfv_yardstick_tc005.rst
-   opnfv_yardstick_tc008.rst
-   opnfv_yardstick_tc009.rst
-   opnfv_yardstick_tc010.rst
-   opnfv_yardstick_tc011.rst
-   opnfv_yardstick_tc012.rst
-   opnfv_yardstick_tc014.rst
-   opnfv_yardstick_tc024.rst
-   opnfv_yardstick_tc037.rst
-   opnfv_yardstick_tc038.rst
-   opnfv_yardstick_tc042.rst
-   opnfv_yardstick_tc043.rst
-   opnfv_yardstick_tc044.rst
-   opnfv_yardstick_tc055.rst
-   opnfv_yardstick_tc061.rst
-   opnfv_yardstick_tc063.rst
-   opnfv_yardstick_tc069.rst
-   opnfv_yardstick_tc070.rst
-   opnfv_yardstick_tc071.rst
-   opnfv_yardstick_tc072.rst
-   opnfv_yardstick_tc073.rst
-   opnfv_yardstick_tc075.rst
-   opnfv_yardstick_tc076.rst
-
-OPNFV Feature Test Cases
-========================
-
-H A
----
-
-.. toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc019.rst
-   opnfv_yardstick_tc025.rst
-   opnfv_yardstick_tc045.rst
-   opnfv_yardstick_tc046.rst
-   opnfv_yardstick_tc047.rst
-   opnfv_yardstick_tc048.rst
-   opnfv_yardstick_tc049.rst
-   opnfv_yardstick_tc050.rst
-   opnfv_yardstick_tc051.rst
-   opnfv_yardstick_tc052.rst
-   opnfv_yardstick_tc053.rst
-   opnfv_yardstick_tc054.rst
-
-IPv6
-----
-
-.. toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc027.rst
-
-KVM
----
-
-.. toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc028.rst
-
-Parser
-------
-
-.. toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc040.rst
-
-StorPerf
---------
-
-.. toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc074.rst
-
-virtual Traffic Classifier
---------------------------
-
-.. toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc006.rst
-   opnfv_yardstick_tc007.rst
-   opnfv_yardstick_tc020.rst
-   opnfv_yardstick_tc021.rst
-
-Templates
-=========
-
-.. toctree::
-   :maxdepth: 1
-
-   testcase_description_v2_template
-   Yardstick_task_templates
-
diff --git a/docs/userguide/Yardstick_task_templates.rst b/docs/userguide/Yardstick_task_templates.rst
deleted file mode 100755
index e8130dd2a..000000000
--- a/docs/userguide/Yardstick_task_templates.rst
+++ /dev/null
@@ -1,160 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-Task Template Syntax
-====================
-
-Basic template syntax
----------------------
-A nice feature of the input task format used in Yardstick is that it supports
-a template syntax based on Jinja2.
-This turns out to be extremely useful when, say, you have a fixed structure
-in your task but you want to parameterize the task in some way.
-For example, imagine your input task file (task.yaml) runs a set of Ping
-scenarios:
-
-::
-
-  # Sample benchmark task config file
-  # measure network latency using ping
-  schema: "yardstick:task:0.1"
-
-  scenarios:
-  -
-    type: Ping
-    options:
-      packetsize: 200
-    host: athena.demo
-    target: ares.demo
-
-    runner:
-      type: Duration
-      duration: 60
-      interval: 1
-
-    sla:
-      max_rtt: 10
-      action: monitor
-
-  context:
-      ...
-
-Let's say you want to run the same set of scenarios with the same runner/
-context/sla, but you want to try another packetsize to compare the
-performance. The most elegant solution is then to turn the packetsize value
-into a template variable:
-
-::
-
-  # Sample benchmark task config file
-  # measure network latency using ping
-
-  schema: "yardstick:task:0.1"
-  scenarios:
-  -
-    type: Ping
-    options:
-      packetsize: {{packetsize}}
-    host: athena.demo
-    target: ares.demo
-
-    runner:
-      type: Duration
-      duration: 60
-      interval: 1
-
-    sla:
-      max_rtt: 10
-      action: monitor
-
-  context:
-      ...
-
-and then pass the argument value for {{packetsize}} when starting a task with
-this configuration file.
-Yardstick provides you with different ways to do that:
-
-1. Pass the argument values directly in the command-line interface (with
-   either a JSON or YAML dictionary):
-
-::
-
-  yardstick task start samples/ping-template.yaml
-  --task-args '{"packetsize":"200"}'
-
-2. Refer to a file that specifies the argument values (JSON/YAML):
-
-::
-
-  yardstick task start samples/ping-template.yaml --task-args-file args.yaml
-
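-The referenced file is a plain mapping from template variable names to
-values. As a minimal sketch, an ``args.yaml`` matching the template above
-could look as follows; the file name and the single ``packetsize`` key are
-illustrative assumptions, not contents shipped with Yardstick:
-
-::
-
-  # args.yaml - illustrative task arguments file
-  # each key must match a {{variable}} used in the task template
-  packetsize: "200"
-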
-Using the default values
-------------------------
-Note that the Jinja2 template syntax allows you to set default values for
-your parameters.
-With default values set, your task file will work even if you don't
-parameterize it explicitly while starting a task.
-The default values should be set using the {% set ... %} clause (task.yaml).
-For example:
-
-::
-
-  # Sample benchmark task config file
-  # measure network latency using ping
-  schema: "yardstick:task:0.1"
-  {% set packetsize = packetsize or "100" %}
-  scenarios:
-  -
-    type: Ping
-    options:
-      packetsize: {{packetsize}}
-    host: athena.demo
-    target: ares.demo
-
-    runner:
-      type: Duration
-      duration: 60
-      interval: 1
-  ...
-
-If you don't pass a value for {{packetsize}} while starting a task, the
-default one will be used.
-
-Advanced templates
-------------------
-
-Yardstick makes it possible to use all the power of the Jinja2 template
-syntax, including the mechanism of built-in functions.
-As an example, let us make up a task file that will do a block storage
-performance test.
-The input task file (fio-template.yaml) below uses the Jinja2 for-endfor
-construct to accomplish that:
-
-::
-
-  #Test block sizes of 4KB, 8KB, 64KB, 1MB
-  #Test 5 workloads: read, write, randwrite, randread, rw
-  schema: "yardstick:task:0.1"
-
-  scenarios:
-  {% for bs in ['4k', '8k', '64k', '1024k' ] %}
-  {% for rw in ['read', 'write', 'randwrite', 'randread', 'rw' ] %}
-  -
-    type: Fio
-    options:
-      filename: /home/ubuntu/data.raw
-      bs: {{bs}}
-      rw: {{rw}}
-      ramp_time: 10
-    host: fio.demo
-    runner:
-      type: Duration
-      duration: 60
-      interval: 60
-
-  {% endfor %}
-  {% endfor %}
-  context:
-      ...
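-For clarity, here is a sketch of one of the twenty scenario entries that the
-for-endfor construct above renders (the 4k/read combination). This is only
-the expanded form of the template shown above, not an additional
-configuration:
-
-::
-
-  # one rendered scenario: bs=4k, rw=read
-  -
-    type: Fio
-    options:
-      filename: /home/ubuntu/data.raw
-      bs: 4k
-      rw: read
-      ramp_time: 10
-    host: fio.demo
-    runner:
-      type: Duration
-      duration: 60
-      interval: 60
-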
diff --git a/docs/userguide/comp-intro.rst b/docs/userguide/comp-intro.rst
deleted file mode 100644
index ee68226ad..000000000
--- a/docs/userguide/comp-intro.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-=========
-Yardstick
-=========
-
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-.. _Presentation: https://wiki.opnfv.org/_media/opnfv_summit_-_yardstick_project.pdf
-.. _NFV-TST001: https://docbox.etsi.org/ISG/NFV/Open/Drafts/TST001_-_Pre-deployment_Validation/
-.. _Yardsticktst: https://wiki.opnfv.org/_media/opnfv_summit_-_bridging_opnfv_and_etsi.pdf
-
-The project's goal is to verify infrastructure compliance, from the
-perspective of a Virtual Network Function (VNF).
-
-The project's scope is the development of a test framework, *Yardstick*, test
-cases and test stimuli to enable Network Function Virtualization
-Infrastructure (NFVI) verification.
-
-In the OPNFV Brahmaputra release, generic test cases covering aspects of the
-metrics in the document ETSI GS NFV-TST001_, "Pre-deployment Testing; Report
-on Validation of NFV Environments and Services", are available; further OPNFV
-releases will provide extended testing of these metrics.
-
-The project also includes a sample VNF, the Virtual Traffic Classifier (VTC),
-and its experimental framework, *ApexLake*.
-
-*Yardstick* is used in OPNFV for verifying the OPNFV infrastructure and some
-of the OPNFV features. The *Yardstick* framework is deployed in several OPNFV
-community labs. It is *installer*, *infrastructure* and *application*
-independent.
-
-
-.. seealso:: This Presentation_ for an overview of *Yardstick* and
-   Yardsticktst_ for material on the alignment between ETSI TST001 and
-   Yardstick.
diff --git a/docs/userguide/glossary.rst b/docs/userguide/glossary.rst
deleted file mode 100644
index f8ff41887..000000000
--- a/docs/userguide/glossary.rst
+++ /dev/null
@@ -1,65 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-========
-Glossary
-========
-
-.. glossary::
-   :sorted:
-
-   API
-     Application Programming Interface
-
-   DPI
-     Deep Packet Inspection
-
-   DPDK
-     Data Plane Development Kit
-
-   DSCP
-     Differentiated Services Code Point
-
-   IGMP
-     Internet Group Management Protocol
-
-   IOPS
-     Input/Output Operations Per Second
-
-   NIC
-     Network Interface Controller
-
-   PBFS
-     Packet Based per Flow State
-
-   QoS
-     Quality of Service
-
-   VLAN
-     Virtual LAN
-
-   VM
-     Virtual Machine
-
-   VNF
-     Virtual Network Function
-
-   VNFC
-     Virtual Network Function Component
-
-   NFVI
-     Network Function Virtualization Infrastructure
-
-   SR-IOV
-     Single Root IO Virtualization
-
-   SUT
-     System Under Test
-
-   ToS
-     Type of Service
-
-   VTC
-     Virtual Traffic Classifier
diff --git a/docs/userguide/images/Deployment.png b/docs/userguide/images/Deployment.png
deleted file mode 100755
index aca5670cd..000000000
Binary files a/docs/userguide/images/Deployment.png and /dev/null differ
diff --git a/docs/userguide/images/Grafana_config.png b/docs/userguide/images/Grafana_config.png
deleted file mode 100644
index cb63098dc..000000000
Binary files a/docs/userguide/images/Grafana_config.png and /dev/null differ
diff --git a/docs/userguide/images/InfluxDB_store.png b/docs/userguide/images/InfluxDB_store.png
deleted file mode 100644
index 1770fd255..000000000
Binary files a/docs/userguide/images/InfluxDB_store.png and /dev/null differ
diff --git a/docs/userguide/images/Logical_view.png b/docs/userguide/images/Logical_view.png
deleted file mode 100644
index cdb805448..000000000
Binary files a/docs/userguide/images/Logical_view.png and /dev/null differ
diff --git a/docs/userguide/images/TC002.png b/docs/userguide/images/TC002.png
deleted file mode 100644
index 89154efcc..000000000
Binary files a/docs/userguide/images/TC002.png and /dev/null differ
diff --git a/docs/userguide/images/Use_case.png b/docs/userguide/images/Use_case.png
deleted file mode 100644
index acd52f526..000000000
Binary files a/docs/userguide/images/Use_case.png and /dev/null differ
diff --git a/docs/userguide/images/add.png b/docs/userguide/images/add.png
deleted file mode 100644
index a88a1b146..000000000
Binary files a/docs/userguide/images/add.png and /dev/null differ
diff --git a/docs/userguide/images/login.png b/docs/userguide/images/login.png
deleted file mode 100644
index 045e010e4..000000000
Binary files a/docs/userguide/images/login.png and /dev/null differ
diff --git a/docs/userguide/images/results_visualization.png b/docs/userguide/images/results_visualization.png
deleted file mode 100644
index cd092808b..000000000
Binary files a/docs/userguide/images/results_visualization.png and /dev/null differ
diff --git a/docs/userguide/images/test_execution_flow.png b/docs/userguide/images/test_execution_flow.png
deleted file mode 100644
index c20a931a4..000000000
Binary files a/docs/userguide/images/test_execution_flow.png and /dev/null differ
diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst
deleted file mode 100644
index 826a9d9bf..000000000
--- a/docs/userguide/index.rst
+++ /dev/null
@@ -1,27 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-==================
-Yardstick Overview
-==================
-
-.. toctree::
-   :maxdepth: 2
-
-   01-introduction
-   02-methodology
-   03-architecture
-   04-vtc-overview
-   05-apexlake_installation
-   06-apexlake_api
-   07-nsb-overview
-   08-nsb_installation
-   09-installation
-   10-yardstick_plugin
-   11-result-store-InfluxDB
-   12-grafana
-   13-list-of-tcs
-   glossary
-   references
diff --git a/docs/userguide/opnfv_yardstick_tc001.rst b/docs/userguide/opnfv_yardstick_tc001.rst
deleted file mode 100644
index b53c508a6..000000000
--- a/docs/userguide/opnfv_yardstick_tc001.rst
+++ /dev/null
@@ -1,133 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC001
-*************************************
-
-.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
-
-+-----------------------------------------------------------------------------+
-|Network Performance                                                          |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC001_NETWORK PERFORMANCE                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Number of flows and throughput                               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC001 is to evaluate the IaaS network         |
-|              | performance with regards to flows and throughput, such as    |
-|              | if and how different amounts of flows matter for the         |
-|              | throughput between hosts on different compute blades.        |
-|              | Typically e.g. the performance of a vSwitch depends on the   |
-|              | number of flows running through it. The performance of      |
-|              | other equipment or entities can also depend on the number   |
-|              | of flows or the packet sizes used.                           |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | pktgen                                                       |
-|              |                                                              |
-|              | Linux packet generator is a tool to generate packets at      |
-|              | very high speed in the kernel. pktgen is mainly used to      |
-|              | test and drive network LAN equipment. pktgen supports        |
-|              | multi-threading and can generate UDP packets with random     |
-|              | MAC addresses, IP addresses and port numbers, using          |
-|              | multiple CPU cores and Gigabit Ethernet NICs on different    |
-|              | PCI buses (PCI, PCIe). Its performance depends on hardware   |
-|              | parameters such as CPU speed, memory latency and PCI bus     |
-|              | speed; the transmit rate can exceed 10 Gbit/s, which         |
-|              | satisfies most NIC test requirements.                        |
-|              |                                                              |
-|              | (Pktgen is not always part of a Linux distribution, hence it |
-|              | needs to be installed. It is part of the Yardstick Docker    |
-|              | image.                                                       |
-|              | As an example see the /yardstick/tools/ directory for how    |
-|              | to generate a Linux image with pktgen included.)             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | This test case uses Pktgen to generate packet flow between   |
-|description   | two hosts for simulating network workloads on the SUT.       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|traffic       | An IP table is set up on the server to monitor received      |
-|profile       | packets.                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc001.yaml                             |
-|              |                                                              |
-|              | Packet size is set to 60 bytes.                              |
-|              | Number of ports: 10, 50, 100, 500 and 1000, where each       |
-|              | runs for 20 seconds. The whole sequence is run twice.        |
-|              | The client and server are distributed on different hardware. |
-|              |                                                              |
-|              | For SLA max_ppm is set to 1000. The configured amounts of    |
-|              | ports map to between 110 and 1001000 flows, respectively.    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              |  * packet sizes;                                             |
-|              |  * amount of flows;                                          |
-|              |  * test duration.                                            |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-|              | SLA (optional): max_ppm: The number of packets per million   |
-|              | packets sent that it is acceptable to lose, i.e. that are    |
-|              | not received.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is used for generating high network           |
-|              | throughput to simulate certain workloads on the SUT. Hence   |
-|              | it should work with other test cases.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | pktgen_                                                      |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with pktgen included in it.                                  |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | Two host VMs are booted, as server and client.               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the server VM by using ssh.      |
-|              | The 'pktgen_benchmark' bash script is copied from the Jump   |
-|              | Host to the server VM via the ssh tunnel.                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | An IP table is set up on the server to monitor received      |
-|              | packets.                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | pktgen is invoked to generate packet flow between the server |
-|              | and client for simulating network workloads on the SUT.      |
-|              | Results are processed and checked against the SLA. Logs are  |
-|              | produced and stored.                                         |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 5        | Two host VMs are deleted.                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
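-
-Expressed as a task file, the configuration above corresponds roughly to the
-following scenario entry. The exact option names and the host names are
-illustrative assumptions based on the sample task files, not a verbatim copy
-of opnfv_yardstick_tc001.yaml:
-
-::
-
-  # illustrative sketch of a TC001-style Pktgen scenario
-  scenarios:
-  -
-    type: Pktgen
-    options:
-      packetsize: 60         # bytes, as described above
-      number_of_ports: 10    # swept up to 1000 in the real test case
-      duration: 20           # seconds per iteration
-    host: demeter.demo
-    target: poseidon.demo
-    runner:
-      type: Iteration
-      iterations: 2
-      interval: 1
-    sla:
-      max_ppm: 1000
-      action: monitor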
diff --git a/docs/userguide/opnfv_yardstick_tc002.rst b/docs/userguide/opnfv_yardstick_tc002.rst
deleted file mode 100644
index c98780fd5..000000000
--- a/docs/userguide/opnfv_yardstick_tc002.rst
+++ /dev/null
@@ -1,126 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC002
-*************************************
-
-.. _cirros-image: https://download.cirros-cloud.net
-.. _Ping: https://linux.die.net/man/8/ping
-
-+-----------------------------------------------------------------------------+
-|Network Latency                                                              |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC002_NETWORK LATENCY                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | RTT (Round Trip Time)                                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC002 is to do a basic verification that      |
-|              | network latency is within acceptable boundaries when packets |
-|              | travel between hosts located on same or different compute    |
-|              | blades.                                                      |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | ping                                                         |
-|              |                                                              |
-|              | Ping is a computer network administration software utility   |
-|              | used to test the reachability of a host on an Internet       |
-|              | Protocol (IP) network. It measures the round-trip time for   |
-|              | packets sent from the originating host to a destination      |
-|              | computer that are echoed back to the source.                 |
-|              |                                                              |
-|              | Ping is normally part of any Linux distribution, hence it    |
-|              | doesn't need to be installed. It is also part of the         |
-|              | Yardstick Docker image.                                      |
-|              | (For example also a Cirros image can be downloaded from      |
-|              | cirros-image_, it includes ping.)                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test topology | Ping packets (ICMP protocol's mandatory ECHO_REQUEST         |
-|              | datagram) are sent from host VM to target VM(s) to elicit    |
-|              | ICMP ECHO_RESPONSE.                                          |
-|              |                                                              |
-|              | For one host VM there can be multiple target VMs.            |
-|              | Host VM and target VM(s) can be on same or different compute |
-|              | blades.                                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc002.yaml                             |
-|              |                                                              |
-|              | Packet size 100 bytes. Test duration 60 seconds.             |
-|              | One ping every 10 seconds. The test is iterated two times.   |
-|              | SLA RTT is set to maximum 10 ms.                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | This test case can be configured with different:             |
-|              |                                                              |
-|              |  * packet sizes;                                             |
-|              |  * burst sizes;                                              |
-|              |  * ping intervals;                                           |
-|              |  * test durations;                                           |
-|              |  * test iterations.                                          |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-|              | SLA is optional. The SLA in this test case serves as an      |
-|              | example. Considerably lower RTT is expected, and is also     |
-|              | normal to achieve in balanced L2 environments. However, to   |
-|              | cover most configurations, both bare metal and fully         |
-|              | virtualized ones, this value should be possible to achieve   |
-|              | and acceptable for black box testing. Many real-time         |
-|              | applications start to suffer badly if the RTT is higher      |
-|              | than this. Some may also suffer close to this RTT, while     |
-|              | others may not suffer at all. It is a compromise that may    |
-|              | have to be tuned for different configuration purposes.       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is one of Yardstick's generic tests. Thus it  |
-|              | is runnable on most of the scenarios.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | Ping_                                                        |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image (cirros-image) needs to be installed     |
-|conditions    | into Glance with ping included in it.                        |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | Two host VMs are booted, as server and client.               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the server VM by using ssh.      |
-|              | The 'ping_benchmark' bash script is copied from the Jump     |
-|              | Host to the server VM via the ssh tunnel.                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | Ping is invoked. Ping packets are sent from server VM to     |
-|              | client VM. RTT results are calculated and checked against    |
-|              | the SLA. Logs are produced and stored.                       |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | Two host VMs are deleted.                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Test should not PASS if any RTT is above the optional SLA    |
-|              | value, or if there is a test case execution problem.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
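-
-The configuration above maps onto a scenario entry of roughly the following
-shape; this is a sketch assuming the Ping scenario fields shown in the task
-template chapter, not a verbatim copy of opnfv_yardstick_tc002.yaml:
-
-::
-
-  # illustrative sketch of a TC002-style Ping scenario
-  scenarios:
-  -
-    type: Ping
-    options:
-      packetsize: 100        # bytes
-    host: athena.demo
-    target: ares.demo
-    runner:
-      type: Duration
-      duration: 60           # seconds
-      interval: 10           # one ping every 10 seconds
-    sla:
-      max_rtt: 10            # milliseconds
-      action: monitor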
diff --git a/docs/userguide/opnfv_yardstick_tc004.rst b/docs/userguide/opnfv_yardstick_tc004.rst
deleted file mode 100644
index 3554b3826..000000000
--- a/docs/userguide/opnfv_yardstick_tc004.rst
+++ /dev/null
@@ -1,110 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC004
-*************************************
-
-.. _cachestat: https://github.com/brendangregg/perf-tools/tree/master/fs
-
-+-----------------------------------------------------------------------------+
-|Cache Utilization                                                            |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC004_CACHE Utilization                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | cache hit, cache miss, hit/miss ratio, buffer size and page  |
-|              | cache size                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC004 is to evaluate the IaaS compute         |
-|              | capability with regards to cache utilization. This test case |
-|              | should be run in parallel with other Yardstick test cases    |
-|              | and not run as a stand-alone test case.                      |
-|              |                                                              |
-|              | This test case measures cache usage statistics, including    |
-|              | cache hit, cache miss, hit ratio, buffer cache size and page |
-|              | cache size, with some workloads running on the               |
-|              | infrastructure.                                              |
-|              | Both average and maximum values are collected.               |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | cachestat                                                    |
-|              |                                                              |
-|              | cachestat is a tool using Linux ftrace capabilities for      |
-|              | showing Linux page cache hit/miss statistics.                |
-|              |                                                              |
-|              | (cachestat is not always part of a Linux distribution, hence |
-|              | it needs to be installed. As an example see the              |
-|              | /yardstick/tools/ directory for how to generate a Linux      |
-|              | image with cachestat included.)                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | The cachestat test is invoked in a host VM on a compute      |
-|description   | blade. The cachestat test requires some other test cases     |
-|              | running in the host to stimulate workload.                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | File: cachestat.yaml (in the 'samples' directory)            |
-|              |                                                              |
-|              | Interval is set to 1. The test repeats, pausing for 1        |
-|              | second in-between.                                           |
-|              | Test duration is set to 60 seconds.                          |
-|              |                                                              |
-|              | SLA is not available in this test case.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              |  * interval;                                                 |
-|              |  * runner Duration.                                          |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is one of Yardstick's generic tests. Thus it  |
-|              | is runnable on most of the scenarios.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | cachestat_                                                   |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with cachestat included in the image.                        |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | A host VM with cachestat installed is booted.                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the host VM by using ssh.        |
-|              | The 'cache_stat' bash script is copied from the Jump Host to |
-|              | the server VM via the ssh tunnel.                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | The 'cache_stat' script is invoked. Raw cache usage          |
-|              | statistics are collected and filtered. Average and maximum   |
-|              | values are calculated and recorded. Logs are produced and    |
-|              | stored.                                                      |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The host VM is deleted.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | None. Cache utilization results are collected and stored.    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
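-
-A scenario entry matching the description above could look roughly as
-follows. The ``CACHEstat`` type name and the host name are assumptions based
-on the samples directory, so check samples/cachestat.yaml for the canonical
-version:
-
-::
-
-  # illustrative sketch of a cachestat scenario
-  scenarios:
-  -
-    type: CACHEstat
-    options:
-      interval: 1            # seconds between samples
-    host: poseidon.demo
-    runner:
-      type: Duration
-      duration: 60
-      interval: 1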
diff --git a/docs/userguide/opnfv_yardstick_tc005.rst b/docs/userguide/opnfv_yardstick_tc005.rst
deleted file mode 100644
index 1c2d71d81..000000000
--- a/docs/userguide/opnfv_yardstick_tc005.rst
+++ /dev/null
@@ -1,125 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC005
-*************************************
-
-.. _fio: http://bluestop.org/files/fio/HOWTO.txt
-
-+-----------------------------------------------------------------------------+
-|Storage Performance                                                          |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC005_STORAGE PERFORMANCE                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | IOPS (Average IOs performed per second),                     |
-|              | Throughput (Average disk read/write bandwidth rate),         |
-|              | Latency (Average disk read/write latency)                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC005 is to evaluate the IaaS storage         |
-|              | performance with regards to IOPS, throughput and latency.    |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | fio                                                          |
-|              |                                                              |
-|              | fio is an I/O tool meant to be used both for benchmark and   |
-|              | stress/hardware verification. It has support for 19          |
-|              | different types of I/O engines (sync, mmap, libaio,          |
-|              | posixaio, SG v3, splice, null, network, syslet, guasi,       |
-|              | solarisaio, and more), I/O priorities (for newer Linux       |
-|              | kernels), rate I/O, forked or threaded jobs, and much more.  |
-|              |                                                              |
-|              | (fio is not always part of a Linux distribution, hence it    |
-|              | needs to be installed. As an example see the                 |
-|              | /yardstick/tools/ directory for how to generate a Linux      |
-|              | image with fio included.)                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | The fio test is invoked in a host VM on a compute blade; a   |
-|description   | job file as well as parameters are passed to fio and fio     |
-|              | will start doing what the job file tells it to do.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc005.yaml                             |
-|              |                                                              |
-|              | IO types are set to read, write, randwrite, randread, rw.    |
-|              | IO block size is set to 4KB, 64KB, 1024KB.                   |
-|              | fio is run for each IO type and IO block size scheme,        |
-|              | each iteration runs for 30 seconds (10 for ramp time, 20 for |
-|              | runtime).                                                    |
-|              |                                                              |
-|              | For SLA, minimum read/write iops is set to 100,              |
-|              | minimum read/write throughput is set to 400 KB/s,            |
-|              | and maximum read/write latency is set to 20000 usec.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | This test case can be configured with different:             |
-|              |                                                              |
-|              |  * IO types;                                                 |
-|              |  * IO block size;                                            |
-|              |  * IO depth;                                                 |
-|              |  * ramp time;                                                |
-|              |  * test duration.                                            |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-|              | SLA is optional. The SLA in this test case serves as an      |
-|              | example. Considerably higher throughput and lower latency    |
-|              | are expected. However, to cover most configurations, both    |
-|              | bare metal and fully virtualized ones, this value should be  |
-|              | possible to achieve and acceptable for black box testing.    |
-|              | Many heavy IO applications start to suffer badly if the      |
-|              | read/write bandwidths are lower than this.                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is one of Yardstick's generic tests. Thus it  |
-|              | is runnable on most of the scenarios.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | fio_                                                         |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with fio included in it.                                     |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | A host VM with fio installed is booted.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the host VM by using ssh.        |
-|              | The 'fio_benchmark' bash script is copied from the Jump Host |
-|              | to the host VM via the ssh tunnel.                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | The 'fio_benchmark' script is invoked. Simulated IO          |
-|              | operations are started. IOPS, disk read/write bandwidth and  |
-|              | latency are recorded and checked against the SLA. Logs are   |
-|              | produced and stored.                                         |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The host VM is deleted.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
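-
-The job parameters described above translate into a scenario entry of
-roughly this shape. The SLA key names shown here are illustrative
-assumptions, so consult opnfv_yardstick_tc005.yaml for the exact spelling:
-
-::
-
-  # illustrative sketch of one TC005-style Fio scenario (bs=4k, rw=read)
-  scenarios:
-  -
-    type: Fio
-    options:
-      filename: /home/ubuntu/data.raw
-      bs: 4k
-      rw: read
-      ramp_time: 10          # seconds
-      duration: 20           # seconds of measured runtime
-    host: fio.demo
-    runner:
-      type: Iteration
-      iterations: 1
-    sla:
-      read_iops: 100         # minimum acceptable IOPS
-      read_bw: 400           # minimum acceptable bandwidth, KB/s
-      read_lat: 20000        # maximum acceptable latency, usec
-      action: monitor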
diff --git a/docs/userguide/opnfv_yardstick_tc006.rst b/docs/userguide/opnfv_yardstick_tc006.rst
deleted file mode 100644
index 2ccb417c1..000000000
--- a/docs/userguide/opnfv_yardstick_tc006.rst
+++ /dev/null
@@ -1,144 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation and others.
-
-*************************************
-Yardstick Test Case Description TC006
-*************************************
-
-.. _DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
-.. _rfc2544: https://www.ietf.org/rfc/rfc2544.txt
-
-+-----------------------------------------------------------------------------+
-|Network Performance                                                          |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC006_Virtual Traffic Classifier Data Plane  |
-|              | Throughput Benchmarking Test.                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Throughput                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To measure the throughput supported by the virtual Traffic   |
-|              | Classifier according to the RFC2544 methodology for a        |
-|              | user-defined set of vTC deployment configurations.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc006.yaml                             |
-|              |                                                              |
-|              | packet_size: size of the packets to be used during the       |
-|              |     throughput calculation.                                  |
-|              |     Allowed values: [64, 128, 256, 512, 1024, 1280, 1518]    |
-|              |                                                              |
-|              | vnic_type: type of VNIC to be used.                          |
-|              |     Allowed values are:                                      |
-|              |       - normal: for default OvS port configuration           |
-|              |       - direct: for SR-IOV port configuration                |
-|              |     Default value: None                                      |
-|              |                                                              |
-|              | vtc_flavor: OpenStack flavor to be used for the vTC.         |
-|              |     Default available values are: m1.small, m1.medium,       |
-|              |     and m1.large, but the user can create his/her own        |
-|              |     flavor and give it as input.                             |
-|              |     Default value: None                                      |
-|              |                                                              |
-|              | vlan_sender: vlan tag of the network on which the vTC will   |
-|              |     receive traffic (VLAN Network 1).                        |
-|              |     Allowed values: range (1, 4096)                          |
-|              |                                                              |
-|              | vlan_receiver: vlan tag of the network on which the vTC      |
-|              |     will send traffic back to the packet generator           |
-|              |     (VLAN Network 2).                                        |
-|              |     Allowed values: range (1, 4096)                          |
-|              |                                                              |
-|              | default_net_name: neutron name of the default network that   |
-|              |     is used for access to the internet from the vTC          |
-|              |     (vNIC 1).                                                |
-|              |                                                              |
-|              | default_subnet_name: subnet name for vNIC1                   |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_net_1_name: Neutron Name for VLAN Network 1             |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_subnet_1_name: Subnet Neutron name for VLAN Network 1   |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_net_2_name: Neutron Name for VLAN Network 2             |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_subnet_2_name: Subnet Neutron name for VLAN Network 2   |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | DPDK pktgen                                                  |
-|              |                                                              |
-|              | DPDK Pktgen is not part of a Linux distribution,             |
-|              | hence it needs to be installed by the user.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | DPDK Pktgen: DPDKpktgen_                                     |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-|              | RFC 2544: rfc2544_                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different flavors, vNIC type     |
-|              | and packet sizes. Default values exist as specified above.   |
-|              | The vNIC type and flavor MUST be specified by the user.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The vTC has been successfully instantiated and configured.   |
-|              | The user has correctly assigned the values to the deployment |
-|              | configuration parameters.                                    |
-|              |                                                              |
-|              | - Multicast traffic MUST be enabled on the network.          |
-|              |   The Data network switches need to be configured in         |
-|              |   order to manage multicast traffic.                         |
-|              | - If SR-IOV vNICs are used, SR-IOV compatible NICs           |
-|              |   must be used on the compute node.                          |
-|              | - Yardstick needs to be installed on a host connected to the |
-|              |   data network and the host must have 2 DPDK-compatible      |
-|              |   NICs. Proper configuration of DPDK and DPDK pktgen is      |
-|              |   required before running the test case.                     |
-|              |   (For further instructions please refer to the ApexLake     |
-|              |   documentation.)                                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | Description and expected results                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The vTC is deployed, according to the user-defined           |
-|              | configuration.                                               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | The vTC is correctly deployed and configured as necessary.   |
-|              | The initialization script has been correctly executed and    |
-|              | the vTC is ready to receive and process the traffic.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | Test case is executed with the selected parameters:          |
-|              |  - vTC flavor                                                |
-|              |  - vNIC type                                                 |
-|              |  - packet size                                               |
-|              | The traffic is sent to the vTC using the maximum available   |
-|              | traffic rate for 60 seconds.                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The vTC instance forwards all the packets back to the packet |
-|              | generator for 60 seconds, as specified by RFC 2544.          |
-|              |                                                              |
-|              | Steps 3 and 4 are executed several times, with different     |
-|              | rates, in order to find the maximum supported traffic rate   |
-|              | according to the current definition of throughput in RFC     |
-|              | 2544.                                                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | The result of the test is a number between 0 and 100 which   |
-|              | represents the throughput in terms of percentage of the      |
-|              | available pktgen NIC bandwidth.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc007.rst b/docs/userguide/opnfv_yardstick_tc007.rst
deleted file mode 100644
index 87663f816..000000000
--- a/docs/userguide/opnfv_yardstick_tc007.rst
+++ /dev/null
@@ -1,162 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation and others.
-
-*************************************
-Yardstick Test Case Description TC007
-*************************************
-
-.. _DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
-.. _rfc2544: https://www.ietf.org/rfc/rfc2544.txt
-
-+-----------------------------------------------------------------------------+
-|Network Performance                                                          |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC007_Virtual Traffic Classifier Data Plane  |
-|              | Throughput Benchmarking Test in Presence of Noisy            |
-|              | Neighbours                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Throughput                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To measure the throughput supported by the virtual Traffic   |
-|              | Classifier according to the RFC2544 methodology for a        |
-|              | user-defined set of vTC deployment configurations in the     |
-|              | presence of noisy neighbours.                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc007.yaml                             |
-|              |                                                              |
-|              | packet_size: size of the packets to be used during the       |
-|              |     throughput calculation.                                  |
-|              |     Allowed values: [64, 128, 256, 512, 1024, 1280, 1518]    |
-|              |                                                              |
-|              | vnic_type: type of VNIC to be used.                          |
-|              |     Allowed values are:                                      |
-|              |       - normal: for default OvS port configuration           |
-|              |       - direct: for SR-IOV port configuration                |
-|              |                                                              |
-|              | vtc_flavor: OpenStack flavor to be used for the vTC.         |
-|              |     Default available values are: m1.small, m1.medium,       |
-|              |     and m1.large, but the user can create his/her own        |
-|              |     flavor and give it as input.                             |
-|              |                                                              |
-|              | num_of_neighbours: Number of noisy neighbours (VMs) to be    |
-|              |     instantiated during the experiment.                      |
-|              |     Allowed values: range (1, 10)                            |
-|              |                                                              |
-|              | amount_of_ram: RAM to be used by each neighbour.             |
-|              |     Allowed values: ['250M', '1G', '2G', '3G', '4G', '5G',   |
-|              |     '6G', '7G', '8G', '9G', '10G']                           |
-|              |     Default value: 256M                                      |
-|              |                                                              |
-|              | number_of_cores: Number of cores to be used by each noisy    |
-|              |     neighbour (VM).                                          |
-|              |     Allowed values: range (1, 10)                            |
-|              |     Default value: 1                                         |
-|              |                                                              |
-|              | vlan_sender: vlan tag of the network on which the vTC will   |
-|              |     receive traffic (VLAN Network 1).                        |
-|              |     Allowed values: range (1, 4096)                          |
-|              |                                                              |
-|              | vlan_receiver: vlan tag of the network on which the vTC      |
-|              |     will send traffic back to the packet generator           |
-|              |     (VLAN Network 2).                                        |
-|              |     Allowed values: range (1, 4096)                          |
-|              |                                                              |
-|              | default_net_name: neutron name of the default network that   |
-|              |     is used for access to the internet from the vTC          |
-|              |     (vNIC 1).                                                |
-|              |                                                              |
-|              | default_subnet_name: subnet name for vNIC1                   |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_net_1_name: Neutron Name for VLAN Network 1             |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_subnet_1_name: Subnet Neutron name for VLAN Network 1   |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_net_2_name: Neutron Name for VLAN Network 2             |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-|              | vlan_subnet_2_name: Subnet Neutron name for VLAN Network 2   |
-|              |     (information available through Neutron).                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | DPDK pktgen                                                  |
-|              |                                                              |
-|              | DPDK Pktgen is not part of a Linux distribution,             |
-|              | hence it needs to be installed by the user.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | DPDKpktgen_                                                  |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-|              | rfc2544_                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different flavors, vNIC type     |
-|              | and packet sizes. Default values exist as specified above.   |
-|              | The vNIC type and flavor MUST be specified by the user.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The vTC has been successfully instantiated and configured.   |
-|              | The user has correctly assigned the values to the deployment |
-|              | configuration parameters.                                    |
-|              |                                                              |
-|              | - Multicast traffic MUST be enabled on the network.          |
-|              |   The Data network switches need to be configured in         |
-|              |   order to manage multicast traffic.                         |
-|              | - If SR-IOV vNICs are used, SR-IOV compatible NICs           |
-|              |   must be used on the compute node.                          |
-|              | - Yardstick needs to be installed on a host connected to the |
-|              |   data network and the host must have 2 DPDK-compatible      |
-|              |   NICs. Proper configuration of DPDK and DPDK pktgen is      |
-|              |   required before running the test case.                     |
-|              |   (For further instructions please refer to the ApexLake     |
-|              |   documentation.)                                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | Description and expected results                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The noisy neighbours are deployed as required by the user.   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | The vTC is deployed, according to the configuration required |
-|              | by the user.                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | The vTC is correctly deployed and configured as necessary.   |
-|              | The initialization script has been correctly executed and    |
-|              | the vTC is ready to receive and process the traffic.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | Test case is executed with the parameters specified by the   |
-|              | user:                                                        |
-|              |  - vTC flavor                                                |
-|              |  - vNIC type                                                 |
-|              |  - packet size                                               |
-|              | The traffic is sent to the vTC using the maximum available   |
-|              | traffic rate.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 5        | The vTC instance forwards all the packets back to the        |
-|              | packet generator for 60 seconds, as specified by RFC 2544.   |
-|              |                                                              |
-|              | Steps 4 and 5 are executed several times with different      |
-|              | traffic rates, in order to find the maximum supported        |
-|              | traffic rate, according to the current definition of         |
-|              | throughput in RFC 2544.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | The result of the test is a number between 0 and 100 which   |
-|              | represents the throughput in terms of percentage of the      |
-|              | available pktgen NIC bandwidth.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc008.rst b/docs/userguide/opnfv_yardstick_tc008.rst
deleted file mode 100644
index a4ecaf6ae..000000000
--- a/docs/userguide/opnfv_yardstick_tc008.rst
+++ /dev/null
@@ -1,90 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC008
-*************************************
-
-.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
-
-+-----------------------------------------------------------------------------+
-|Packet Loss Extended Test                                                    |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC008_NW PERF, Packet loss Extended Test     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Number of flows, packet size and throughput                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS network performance with regards to     |
-|              | flows and throughput, such as if and how different amounts   |
-|              | of packet sizes and flows matter for the throughput between  |
-|              | VMs on different compute blades. Typically e.g. the          |
-|              | performance of a vSwitch depends on the number of flows      |
-|              | running through it. The performance of other equipment or    |
-|              | entities can also depend on the number of flows or the       |
-|              | packet sizes used.                                           |
-|              | The purpose is also to be able to spot trends. Test results, |
-|              | graphs and similar shall be stored for comparison reasons    |
-|              | and product evolution understanding between different OPNFV  |
-|              | versions and/or configurations.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc008.yaml                             |
-|              |                                                              |
-|              | Packet size: 64, 128, 256, 512, 1024, 1280 and 1518 bytes.   |
-|              |                                                              |
-|              | Number of ports: 1, 10, 50, 100, 500 and 1000. The           |
-|              | configured amounts of ports map to between 2 and 1001000     |
-|              | flows, respectively. Each packet_size/port_amount            |
-|              | combination is run ten times, for 20 seconds each. Then the  |
-|              | next packet_size/port_amount combination is run, and so on.  |
-|              |                                                              |
-|              | The client and server are distributed on different HW.       |
-|              |                                                              |
-|              | For SLA max_ppm is set to 1000.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | pktgen                                                       |
-|              |                                                              |
-|              | (Pktgen is not always part of a Linux distribution, hence it |
-|              | needs to be installed. It is part of the Yardstick Docker    |
-|              | image.                                                       |
-|              | As an example see the /yardstick/tools/ directory for how    |
-|              | to generate a Linux image with pktgen included.)             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | pktgen_                                                      |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different packet sizes, amount   |
-|              | of flows and test duration. Default values exist.            |
-|              |                                                              |
-|              | SLA (optional): max_ppm: The number of packets per million   |
-|              | packets sent that it is acceptable to lose, i.e. that are    |
-|              | not received.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with pktgen included in it.                                  |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The hosts are installed, as server and client. pktgen is     |
-|              | invoked and logs are produced and stored.                    |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
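-
-Echoing the for-endfor technique from the task template chapter, the
-packet-size/port-amount sweep described above could be expressed roughly as
-follows. Treat this as an illustrative sketch rather than the exact contents
-of opnfv_yardstick_tc008.yaml:
-
-::
-
-  # illustrative sweep over packet sizes and port amounts
-  scenarios:
-  {% for pkt_size in [64, 128, 256, 512, 1024, 1280, 1518] %}
-  {% for num_ports in [1, 10, 50, 100, 500, 1000] %}
-  -
-    type: Pktgen
-    options:
-      packetsize: {{pkt_size}}
-      number_of_ports: {{num_ports}}
-      duration: 20
-    host: demeter.demo
-    target: poseidon.demo
-    runner:
-      type: Iteration
-      iterations: 10
-      interval: 1
-    sla:
-      max_ppm: 1000
-      action: monitor
-  {% endfor %}
-  {% endfor %}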
diff --git a/docs/userguide/opnfv_yardstick_tc009.rst b/docs/userguide/opnfv_yardstick_tc009.rst
deleted file mode 100644
index d6f445361..000000000
--- a/docs/userguide/opnfv_yardstick_tc009.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC009
-*************************************
-
-.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
-
-+-----------------------------------------------------------------------------+
-|Packet Loss                                                                  |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC009_NW PERF, Packet loss                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Number of flows, packets lost and throughput                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS network performance with regards to     |
-|              | flows and throughput, such as if and how different amounts   |
-|              | of flows matter for the throughput between VMs on different  |
-|              | compute blades.                                              |
-|              | Typically e.g. the performance of a vSwitch depends on the   |
-|              | number of flows running through it. The performance of       |
-|              | other equipment or entities can also depend on the number    |
-|              | of flows or the packet sizes used.                           |
-|              | The purpose is also to be able to spot trends. Test results, |
-|              | graphs and similar shall be stored for comparison reasons    |
-|              | and product evolution understanding between different OPNFV  |
-|              | versions and/or configurations.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc009.yaml                             |
-|              |                                                              |
-|              | Packet size: 64 bytes                                        |
-|              |                                                              |
-|              | Number of ports: 1, 10, 50, 100, 500 and 1000. The           |
-|              | configured amounts of ports map to between 2 and 1001000     |
-|              | flows, respectively. Each port amount is run ten times, for  |
-|              | 20 seconds each. Then the next port_amount is run, and so    |
-|              | on.                                                          |
-|              |                                                              |
-|              | The client and server are distributed on different HW.       |
-|              |                                                              |
-|              | For SLA max_ppm is set to 1000.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | pktgen                                                       |
-|              |                                                              |
-|              | (Pktgen is not always part of a Linux distribution, hence it |
-|              | needs to be installed. It is part of the Yardstick Docker    |
-|              | image.                                                       |
-|              | As an example see the /yardstick/tools/ directory for how    |
-|              | to generate a Linux image with pktgen included.)             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | pktgen_                                                      |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different packet sizes, amount   |
-|              | of flows and test duration. Default values exist.            |
-|              |                                                              |
-|              | SLA (optional): max_ppm: The number of packets per million   |
-|              | packets sent that it is acceptable to lose, i.e. that are    |
-|              | not received.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with pktgen included in it.                                  |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The hosts are installed, as server and client. pktgen is     |
-|              | invoked and logs are produced and stored.                    |
-|              |                                                              |
-|              | Result: logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc010.rst b/docs/userguide/opnfv_yardstick_tc010.rst
deleted file mode 100644
index 202307de6..000000000
--- a/docs/userguide/opnfv_yardstick_tc010.rst
+++ /dev/null
@@ -1,154 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC010
-*************************************
-
-.. _lat_mem_rd: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html
-
-+-----------------------------------------------------------------------------+
-|Memory Latency                                                               |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC010_MEMORY LATENCY                         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Memory read latency (nanoseconds)                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC010 is to evaluate the IaaS compute         |
-|              | performance with regards to memory read latency.             |
-|              | It measures the memory read latency for varying memory       |
-|              | sizes and strides. The whole memory hierarchy is measured.   |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Lmbench                                                      |
-|              |                                                              |
-|              | Lmbench is a suite of operating system microbenchmarks,      |
-|              | covering among other things:                                 |
-|              |  * Context switching                                         |
-|              |  * Networking: connection establishment, pipe, TCP, UDP,     |
-|              |    and RPC hot potato                                        |
-|              |  * File system creates and deletes                           |
-|              |  * Process creation                                          |
-|              |  * Signal handling                                           |
-|              |  * System call overhead                                      |
-|              |  * Memory read latency                                       |
-|              |                                                              |
-|              | This test case uses the lat_mem_rd tool from that suite.     |
-|              |                                                              |
-|              | (LMbench is not always part of a Linux distribution, hence   |
-|              | it needs to be installed. As an example see the              |
-|              | /yardstick/tools/ directory for how to generate a Linux      |
-|              | image with LMbench included.)                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | The LMbench lat_mem_rd benchmark measures memory read        |
-|description   | latency for varying memory sizes and strides.                |
-|              |                                                              |
-|              | The benchmark runs as two nested loops.                      |
-|test          | The LMbench lat_mem_rd benchmark measures memory read        |
-|description   | latency for varying memory sizes and strides.                |
-|              |                                                              |
-|              | The benchmark runs as two nested loops. The outer loop is    |
-|              | the stride size. The inner loop is the array size. For each  |
-|              | array size, the benchmark creates a ring of pointers that    |
-|              | point backward one stride. Traversing the array is done by:  |
-|              |                                                              |
-|              |         p = (char **)*p;                                     |
-|              |                                                              |
-|              | in a for loop (the overhead of the for loop is not           |
-|              | significant; the loop is an unrolled loop 100 loads long).   |
-|              | The size of the array varies from 512 bytes to (typically)   |
-|              | eight megabytes. For the small sizes, the cache will have an |
-|              | effect, and the loads will be much faster. This becomes much |
-|              | more apparent when the data is plotted.                      |
-|              |                                                              |
-|              | Only data accesses are measured; the instruction cache is    |
-|              | not measured.                                                |
-|              |                                                              |
-|              | The results are reported in nanoseconds per load and have    |
-|              | been verified accurate to within a few nanoseconds on an SGI |
-|              | Indy.                                                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | File: opnfv_yardstick_tc010.yaml                             |
-|              |                                                              |
-|              | * SLA (max_latency): 30 nanoseconds                          |
-|              | * Stride - 128 bytes                                         |
-|              | * Stop size - 64 megabytes                                   |
-|              | * Iterations: 10 - test is run 10 times iteratively.         |
-|              | * Interval: 1 - there is 1 second delay between each         |
-|              |   iteration.                                                 |
-|              |                                                              |
-|              | SLA is optional. The SLA in this test case serves as an      |
-|              | example. Considerably lower read latency is expected.        |
-|              | However, to cover most configurations, both baremetal and    |
-|              | fully virtualized ones, this value should be possible to     |
-|              | achieve and acceptable for black box testing.                |
-|              | Many heavy IO applications start to suffer badly if the      |
-|              | read latency is higher than this.                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              |   * strides;                                                 |
-|              |   * stop_size;                                               |
-|              |   * iterations and intervals.                                |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-|              | SLA (optional) : max_latency: The maximum memory latency     |
-|              | that is accepted.                                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is one of Yardstick's generic tests. Thus it  |
-|              | is runnable on most of the scenarios.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | LMbench lat_mem_rd_                                          |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with LMbench included in the image.                          |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | A host VM with LMbench installed is booted.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the host VM by using ssh.        |
-|              | The 'lmbench_latency_benchmark' bash script is copied from   |
-|              | the Jump Host to the host VM via the ssh tunnel.             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | The 'lmbench_latency_benchmark' script is invoked. LMbench's |
-|              | lat_mem_rd benchmark starts to measure memory read latency   |
-|              | for varying memory sizes and strides. Memory read latencies  |
-|              | are recorded and checked against the SLA. Logs are produced  |
-|              | and stored.                                                  |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The host VM is deleted.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Test fails if the measured memory latency is above the SLA   |
-|              | value or if there is a test case execution problem.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
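-
-A minimal task-file sketch matching the configuration described above (the
-scenario and option names are illustrative; the context name is
-hypothetical)::
-
-  scenarios:
-    - type: Lmbench
-      options:
-        test_type: "latency"    # select the lat_mem_rd benchmark
-        stride: 128             # bytes
-        stop_size: 64.0         # megabytes
-      host: demeter.demo        # hypothetical context name
-      runner:
-        type: Iteration
-        iterations: 10
-        interval: 1
-      sla:
-        max_latency: 30         # nanoseconds
-        action: monitor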
diff --git a/docs/userguide/opnfv_yardstick_tc011.rst b/docs/userguide/opnfv_yardstick_tc011.rst
deleted file mode 100644
index 48bdef497..000000000
--- a/docs/userguide/opnfv_yardstick_tc011.rst
+++ /dev/null
@@ -1,123 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC011
-*************************************
-
-.. _iperf3: https://iperf.fr/
-
-+-----------------------------------------------------------------------------+
-|Packet delay variation between VMs                                           |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC011_PACKET DELAY VARIATION BETWEEN VMs     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | jitter: packet delay variation (ms)                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC011 is to evaluate the IaaS network         |
-|              | performance with regards to network jitter (packet delay     |
-|              | variation).                                                  |
-|              | It measures the packet delay variation sending the packets   |
-|              | from one VM to the other.                                    |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | iperf3                                                       |
-|              |                                                              |
-|              | iPerf3 is a tool for active measurements of the maximum      |
-|              | achievable bandwidth on IP networks. It supports tuning of   |
-|              | various parameters related to timing, buffers and protocols. |
-|              | The UDP protocol can be used to measure jitter.              |
-|              |                                                              |
-|              | (iperf3 is not always part of a Linux distribution, hence it |
-|              | needs to be installed. It is part of the Yardstick Docker    |
-|              | image. As an example see the /yardstick/tools/ directory for |
-|              | how to generate a Linux image with iperf3 included.)         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | An iperf3 test is invoked between a host VM and a target VM. |
-|description   |                                                              |
-|              | Jitter calculations are continuously computed by the server, |
-|              | as specified by RTP in RFC 1889. The client records a 64 bit |
-|              | second/microsecond timestamp in the packet. The server       |
-|              | computes the relative transit time as (server's receive time |
-|              | - client's send time). The client's and server's clocks do   |
-|              | not need to be synchronized; any difference is subtracted    |
-|              | out in the jitter calculation. Jitter is the smoothed mean of|
-|              | differences between consecutive transit times.               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | File: opnfv_yardstick_tc011.yaml                             |
-|              |                                                              |
-|              | * options:                                                   |
-|              |   protocol: udp # The protocol used by the iperf3 tool       |
-|              |   bandwidth: 20m # iperf3 sends traffic at the given         |
-|              |   bandwidth without pausing                                  |
-|              | * runner:                                                    |
-|              |   duration: 30 # Total test duration 30 seconds.             |
-|              |                                                              |
-|              | * SLA (optional):                                            |
-|              |   jitter: 10 (ms) # The maximum amount of jitter that is     |
-|              |   accepted.                                                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              | * bandwidth: Test case can be configured with different      |
-|              |   bandwidth.                                                 |
-|              |                                                              |
-|              | * duration: The test duration can be configured.             |
-|              |                                                              |
-|              | * jitter: SLA is optional. The SLA in this test case         |
-|              |   serves as an example.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is one of Yardstick's generic tests. Thus it  |
-|              | is runnable on most of the scenarios.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | iperf3_                                                      |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with iperf3 included in the image.                           |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | Two host VMs with iperf3 installed are booted, as server and |
-|              | client.                                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the host VM by using ssh.        |
-|              | An iperf3 server is started on the server VM via the ssh     |
-|              | tunnel.                                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | The iperf3 benchmark is invoked. Jitter is calculated and    |
-|              | checked against the SLA. Logs are produced and stored.       |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The host VMs are deleted.                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Test should not PASS if any jitter is above the optional SLA |
-|              | value, or if there is a test case execution problem.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
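-
-A minimal task-file sketch using the options documented above (field names
-follow the configuration section; the context names are hypothetical)::
-
-  scenarios:
-    - type: Iperf3
-      options:
-        protocol: udp           # UDP is required for jitter measurement
-        bandwidth: 20m          # target sending rate
-      host: zeus.demo           # hypothetical client/server context names
-      target: hera.demo
-      runner:
-        type: Duration
-        duration: 30            # seconds
-      sla:
-        jitter: 10              # ms
-        action: monitor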
diff --git a/docs/userguide/opnfv_yardstick_tc012.rst b/docs/userguide/opnfv_yardstick_tc012.rst
deleted file mode 100644
index b56e829f5..000000000
--- a/docs/userguide/opnfv_yardstick_tc012.rst
+++ /dev/null
@@ -1,135 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC012
-*************************************
-
-.. _bw_mem: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html
-
-+-----------------------------------------------------------------------------+
-|Memory Bandwidth                                                             |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC012_MEMORY BANDWIDTH                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Memory read/write bandwidth (MBps)                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC012 is to evaluate the IaaS compute         |
-|              | performance with regards to memory throughput.               |
-|              | It measures the rate at which data can be read from and      |
-|              | written to the memory (this includes all levels of memory).  |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | LMbench                                                      |
-|              |                                                              |
-|              | LMbench is a suite of operating system microbenchmarks,      |
-|              | covering:                                                    |
-|              |  * Cached file read                                          |
-|              |  * Memory copy (bcopy)                                       |
-|              |  * Memory read                                               |
-|              |  * Memory write                                              |
-|              |  * Pipe                                                      |
-|              |  * TCP                                                       |
-|              |                                                              |
-|              | This test uses the bw_mem tool from that suite.              |
-|              |                                                              |
-|              | (LMbench is not always part of a Linux distribution, hence   |
-|              | it needs to be installed. As an example see the              |
-|              | /yardstick/tools/ directory for how to generate a Linux      |
-|              | image with LMbench included.)                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | The LMbench bw_mem benchmark allocates twice the specified   |
-|description   | amount of memory, zeros it, and then times the copying of    |
-|              | the first half to the second half. The benchmark is invoked  |
-|              | in a host VM on a compute blade. Results are reported in     |
-|              | megabytes moved per second.                                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | File: opnfv_yardstick_tc012.yaml                             |
-|              |                                                              |
-|              | * SLA (optional): 15000 (MBps) min_bw: The minimum amount of |
-|              |   memory bandwidth that is accepted.                         |
-|              | * Size: 10 240 kB - the test allocates twice that size       |
-|              |   (20 480 kB), zeros it, and then measures the time it takes |
-|              |   to copy from one side to another.                          |
-|              | * Benchmark: rdwr - measures the time to read data into      |
-|              |   memory and then write data to the same location.           |
-|              | * Warmup: 0 - the number of iterations to perform before     |
-|              |   taking actual measurements.                                |
-|              | * Iterations: 10 - test is run 10 times iteratively.         |
-|              | * Interval: 1 - there is 1 second delay between each         |
-|              |   iteration.                                                 |
-|              |                                                              |
-|              | SLA is optional. The SLA in this test case serves as an      |
-|              | example. Considerably higher bandwidth is expected.          |
-|              | However, to cover most configurations, both baremetal and    |
-|              | fully virtualized ones, this value should be possible to     |
-|              | achieve and acceptable for black box testing.                |
-|              | Many heavy IO applications start to suffer badly if the      |
-|              | read/write bandwidths are lower than this.                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              |   * memory sizes;                                            |
-|              |   * memory operations (such as rd, wr, rdwr, cp, frd, fwr,   |
-|              |     fcp, bzero, bcopy);                                      |
-|              |   * number of warmup iterations;                             |
-|              |   * iterations and intervals.                                |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-|              | SLA (optional) : min_bandwidth: The minimum memory bandwidth |
-|              | that is accepted.                                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is one of Yardstick's generic tests. Thus it  |
-|              | is runnable on most of the scenarios.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | LMbench bw_mem_                                              |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with LMbench included in the image.                          |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | A host VM with LMbench installed is booted.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the host VM by using ssh.        |
-|              | The "lmbench_bandwidth_benchmark" bash script is copied from |
-|              | the Jump Host to the host VM via the ssh tunnel.             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | The 'lmbench_bandwidth_benchmark' script is invoked.         |
-|              | LMbench's bw_mem benchmark starts to measure memory          |
-|              | read/write bandwidth. Memory read/write bandwidth results    |
-|              | are recorded and checked against the SLA. Logs are produced  |
-|              | and stored.                                                  |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The host VM is deleted.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Test fails if the measured memory bandwidth is below the SLA |
-|              | value or if there is a test case execution problem.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
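-
-A minimal task-file sketch for the bandwidth variant (a sketch only; option
-names mirror the configuration section above, the context name is
-hypothetical)::
-
-  scenarios:
-    - type: Lmbench
-      options:
-        test_type: "bandwidth"  # select the bw_mem benchmark
-        size: 10240             # kB; the test allocates twice this
-        benchmark: "rdwr"
-        warmup: 0
-      host: demeter.demo        # hypothetical context name
-      runner:
-        type: Iteration
-        iterations: 10
-        interval: 1
-      sla:
-        min_bandwidth: 15000    # MBps
-        action: monitor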
diff --git a/docs/userguide/opnfv_yardstick_tc014.rst b/docs/userguide/opnfv_yardstick_tc014.rst
deleted file mode 100644
index 1b0d7831a..000000000
--- a/docs/userguide/opnfv_yardstick_tc014.rst
+++ /dev/null
@@ -1,126 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC014
-*************************************
-
-.. _unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench
-
-+-----------------------------------------------------------------------------+
-|Processing speed                                                             |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC014_PROCESSING SPEED                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | score of single cpu running,                                 |
-|              | score of parallel running                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC014 is to evaluate the IaaS compute         |
-|              | performance with regards to CPU processing speed.            |
-|              | It measures the score of single cpu running and parallel     |
-|              | running.                                                     |
-|              |                                                              |
-|              | The purpose is also to be able to spot the trends.           |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | UnixBench                                                    |
-|              |                                                              |
-|              | UnixBench is a widely used CPU benchmarking tool.            |
-|              | It can measure the performance of bash scripts, and of CPUs  |
-|              | in multithreading and single threading. It can also measure  |
-|              | the performance of parallel tasks. Also, specific disk IO    |
-|              | for small and large files is performed. It can be used on    |
-|              | dedicated Linux servers and Linux VPS servers, running       |
-|              | CentOS, Debian, Ubuntu, Fedora and other distros.            |
-|              |                                                              |
-|              | (UnixBench is not always part of a Linux distribution, hence |
-|              | it needs to be installed. As an example see the              |
-|              | /yardstick/tools/ directory for how to generate a Linux      |
-|              | image with UnixBench included.)                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | UnixBench runs system benchmarks in a host VM on a           |
-|description   | compute blade, getting information on the CPUs in the        |
-|              | system. If the system has more than one CPU, the tests will  |
-|              | be run twice -- once with a single copy of each test running |
-|              | at once, and once with N copies, where N is the number of    |
-|              | CPUs.                                                        |
-|              |                                                              |
-|              | UnixBench will process a set of results from a single test   |
-|              | by averaging the individual pass results into a single final |
-|              | value.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc014.yaml                             |
-|              |                                                              |
-|              | run_mode: Run unixbench in quiet mode or verbose mode        |
-|              | test_type: dhry2reg, whetstone and so on                     |
-|              |                                                              |
-|              | For SLA with single_score and parallel_score, both can be    |
-|              | set by the user; the default is NA.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              |   * test types;                                              |
-|              |   * dhry2reg;                                                |
-|              |   * whetstone.                                               |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-|              | SLA (optional) : min_score: The minimum UnixBench score that |
-|              | is accepted.                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|usability     | This test case is one of Yardstick's generic tests. Thus it  |
-|              | is runnable on most of the scenarios.                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | unixbench_                                                   |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with unixbench included in it.                               |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | A host VM with UnixBench installed is booted.                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | Yardstick is connected with the host VM by using ssh.        |
-|              | The "unixbench_benchmark" bash script is copied from the     |
-|              | Jump Host to the host VM via the ssh tunnel.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | UnixBench is invoked. All the tests are executed using the   |
-|              | "Run" script in the top level of the UnixBench directory.    |
-|              | The "Run" script will run a standard "index" test, and save  |
-|              | the report in the "results" directory. Then the report is    |
-|              | processed by "unixbench_benchmark" and checked against the   |
-|              | SLA.                                                         |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The host VM is deleted.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
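-
-A minimal task-file sketch with the parameters described above (a sketch
-only; SLA values are examples, the context name is hypothetical)::
-
-  scenarios:
-    - type: UnixBench
-      options:
-        run_mode: "verbose"     # or "quiet"
-        test_type: "dhry2reg"   # e.g. dhry2reg, whetstone
-      host: kratos.demo         # hypothetical context name
-      runner:
-        type: Iteration
-        iterations: 1
-      sla:
-        single_score: "100"     # optional, default NA
-        parallel_score: "500"   # optional, default NA
-        action: monitor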
-|              | needed. This attacker includes three parameters:             |
-|              | 1) fault_type: which is used for finding the attacker's      |
-|              | scripts. It should always be set to "kill-process" in this   |
-|              | test case.                                                   |
-|              | 2) process_name: which is the process name of the specified  |
-|              | OpenStack service. If there are multiple processes using the |
-|              | same name on the host, all of them are killed by this        |
-|              | attacker.                                                    |
-|              | 3) host: which is the name of a control node being attacked. |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -fault_type: "kill-process"                                  |
-|              | -process_name: "nova-api"                                    |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:          |
-|              | 1. the "openstack-cmd" monitor constantly requests a specific|
-|              | OpenStack command, which needs two parameters:               |
-|              | 1) monitor_type: which is used for finding the monitor class |
-|              | and related scripts. It should always be set to              |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for requests |
-|              |                                                              |
-|              | 2. the "process" monitor checks whether a process is running |
-|              | on a specific node, which needs three parameters:            |
-|              | 1) monitor_type: which is used for finding the monitor class |
-|              | and related scripts. It should always be set to "process"    |
-|              | for this monitor.                                            |
-|              | 2) process_name: which is the process name for monitor       |
-|              | 3) host: which is the name of the node running the process   |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "nova image-list"                             |
-|              | monitor2:                                                    |
-|              | -monitor_type: "process"                                     |
-|              | -process_name: "nova-api"                                    |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there are two metrics:                    |
-|              | 1)service_outage_time: which indicates the maximum outage    |
-|              | time (seconds) of the specified OpenStack command request.   |
-|              | 2)process_recover_time: which indicates the maximum time     |
-|              | (seconds) from the process being killed to recovered         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc019.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the process  |
-|              | being killed to stopping the monitors                        |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2)POD file: pod.yaml                                         |
-|              | The POD configuration should be recorded in pod.yaml first.  |
-|              | The "host" item in this test case will use the node name in  |
-|              | the pod.yaml.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run as an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attacker: connect the host through SSH, and then execute  |
-|              | the kill process script with param value specified by        |
-|              | "process_name"                                               |
-|              |                                                              |
-|              | Result: Process will be killed.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | It is the action taken when the test cases exit. It will     |
-|              | check the status of the specified process on the host, and   |
-|              | restart the process if it is not running, for the next test  |
-|              | cases.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
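-
-A minimal sketch of the attacker/monitor configuration described above
-(the ServiceHA scenario type and node mapping are illustrative
-assumptions, not copied from the shipped yaml)::
-
-  scenarios:
-    - type: ServiceHA
-      options:
-        attackers:
-          - fault_type: "kill-process"
-            process_name: "nova-api"
-            host: node1
-        monitors:
-          - monitor_type: "openstack-cmd"
-            command_name: "nova image-list"
-          - monitor_type: "process"
-            process_name: "nova-api"
-            host: node1
-      nodes:
-        node1: node1.LF           # hypothetical pod.yaml node name
-      runner:
-        type: Duration
-        duration: 1
-      sla:
-        outage_time: 5            # seconds
-        action: monitor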
-|              |              Allowed values: range (1, 4096)                 |
-|              |                                                              |
-|              | default_net_name: Neutron name of the default network that  |
-|              |              is used for access to the internet from the vTC |
-|              |              (vNIC 1).                                       |
-|              |                                                              |
-|              | default_subnet_name: subnet name for vNIC1                   |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_net_1_name: Neutron Name for VLAN Network 1             |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_subnet_1_name: Subnet Neutron name for VLAN Network 1   |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_net_2_name: Neutron Name for VLAN Network 2             |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_subnet_2_name: Subnet Neutron name for VLAN Network 2   |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | DPDK pktgen                                                  |
-|              |                                                              |
-|              | DPDK Pktgen is not part of a Linux distribution,             |
-|              | hence it needs to be installed by the user.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | DPDKpktgen_                                                  |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-|              | rfc2544_                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different flavors, vNIC type     |
-|              | and packet sizes. Default values exist as specified above.   |
-|              | The vNIC type and flavor MUST be specified by the user.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The vTC has been successfully instantiated and configured.   |
-|              | The user has correctly assigned the values to the deployment |
-|              | configuration parameters.                                    |
-|              |                                                              |
-|              | - Multicast traffic MUST be enabled on the network.          |
-|              |   The Data network switches need to be configured in         |
-|              |   order to manage multicast traffic.                         |
-|              |   Installation and configuration of smcroute is required     |
-|              |   before running the test case.                              |
-|              |   (For further instructions please refer to the ApexLake     |
-|              |   documentation).                                            |
-|              | - If SR-IOV vNICs are used, SR-IOV compatible NICs           |
-|              |   must be used on the compute node.                          |
-|              | - Yardstick needs to be installed on a host connected to the |
-|              |   data network and the host must have 2 DPDK-compatible      |
-|              |   NICs. Proper configuration of DPDK and DPDK pktgen is      |
-|              |   required before running the test case.                     |
-|              |   (For further instructions please refer to the ApexLake     |
-|              |   documentation).                                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | Description and expected results                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The vTC is deployed, according to the configuration provided |
-|              | by the user.                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | The vTC is correctly deployed and configured as necessary.   |
-|              | The initialization script has been correctly executed and    |
-|              | the vTC is ready to receive and process the traffic.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | Test case is executed with the parameters specified by the   |
-|              | user:                                                        |
-|              | - vTC flavor                                                 |
-|              | - vNIC type                                                  |
-|              | Constant rate traffic is sent to the vTC for 10 seconds.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | The vTC instance tags all the packets and sends them back to |
-|              | the packet generator for 10 seconds.                         |
-|              |                                                              |
-|              | The framework checks that the packet generator receives      |
-|              | back all the packets with the correct tag from the vTC.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | The vTC is deemed to be successfully instantiated if all     |
-|              | packets are sent back with the right tag as requested,       |
-|              | else it is deemed DoA (Dead on arrival)                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
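-
-An illustrative options fragment for this test case (only a sketch; the
-scenario type name and the concrete values are assumptions, while the
-option keys follow the configuration section above)::
-
-  scenarios:
-    - type: vtc_instantiation_validation   # assumed scenario name
-      options:
-        vnic_type: "direct"       # or "normal" for default OvS ports
-        vtc_flavor: "m1.medium"   # MUST be set by the user
-        vlan_sender: 1000         # range (1, 4096)
-        vlan_receiver: 1001       # range (1, 4096)
-        default_net_name: "monitoring"       # hypothetical names
-        default_subnet_name: "monitoring_subnet"
-        vlan_net_1_name: "inbound_traffic"
-        vlan_subnet_1_name: "inbound_traffic_subnet"
-        vlan_net_2_name: "outbound_traffic"
-        vlan_subnet_2_name: "outbound_traffic_subnet"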
-|              |              Allowed values: range (1, 4096)                 |
-|              |                                                              |
-|              | default_net_name: Neutron name of the default network that  |
-|              |              is used for access to the internet from the vTC |
-|              |              (vNIC 1).                                       |
-|              |                                                              |
-|              | default_subnet_name: subnet name for vNIC1                   |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_net_1_name: Neutron Name for VLAN Network 1             |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_subnet_1_name: Subnet Neutron name for VLAN Network 1   |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_net_2_name: Neutron Name for VLAN Network 2             |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-|              | vlan_subnet_2_name: Subnet Neutron name for VLAN Network 2   |
-|              |              (information available through Neutron).        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | DPDK pktgen                                                  |
-|              |                                                              |
-|              | DPDK Pktgen is not part of a Linux distribution,             |
-|              | hence it needs to be installed by the user.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | DPDK Pktgen: DPDKpktgen_                                     |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-|              | RFC 2544: rfc2544_                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different flavors, vNIC type     |
-|              | and packet sizes. Default values exist as specified above.   |
-|              | The vNIC type and flavor MUST be specified by the user.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The vTC has been successfully instantiated and configured.   |
-|              | The user has correctly assigned the values to the deployment |
-|              | configuration parameters.                                    |
-|              |                                                              |
-|              | - Multicast traffic MUST be enabled on the network.          |
-|              |   The Data network switches need to be configured in         |
-|              |   order to manage multicast traffic.                         |
-|              |   Installation and configuration of smcroute is required     |
-|              |   before running the test case.                              |
-|              |   (For further instructions please refer to the ApexLake     |
-|              |   documentation).                                            |
-|              | - If SR-IOV vNICs are used, SR-IOV compatible NICs           |
-|              |   must be used on the compute node.                          |
-|              | - Yardstick needs to be installed on a host connected to the |
-|              |   data network and the host must have 2 DPDK-compatible      |
-|              |   NICs. Proper configuration of DPDK and DPDK pktgen is      |
-|              |   required before running the test case.                     |
-|              |   (For further instructions please refer to the ApexLake     |
-|              |   documentation).                                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | Description and expected results                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The noisy neighbours are deployed as required by the user.   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | The vTC is deployed, according to the configuration provided |
-|              | by the user.                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | The vTC is correctly deployed and configured as necessary.   |
-|              | The initialization script has been correctly executed and    |
-|              | the vTC is ready to receive and process the traffic.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | Test case is executed with the selected parameters:          |
-|              | - vTC flavor                                                 |
-|              | - vNIC type                                                  |
-|              | Constant rate traffic is sent to the vTC for 10 seconds.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 5        | The vTC instance tags all the packets and sends them back to |
-|              | the packet generator for 10 seconds.                         |
-|              |                                                              |
-|              | The framework checks if the packet generator receives back   |
-|              | all the packets with the correct tag from the vTC.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | The vTC is deemed to be successfully instantiated if all     |
-|              | packets are sent back with the right tag as requested,       |
-|              | else it is deemed DoA (Dead on arrival)                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
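-
-An illustrative options fragment extending the TC020 sketch with the
-noisy-neighbour parameters above (the scenario type name and concrete
-values are assumptions)::
-
-  scenarios:
-    - type: vtc_instantiation_validation_noisy_neighbors  # assumed name
-      options:
-        vnic_type: "direct"       # or "normal"
-        vtc_flavor: "m1.medium"   # MUST be set by the user
-        num_of_neighbours: 2      # range (1, 10)
-        amount_of_ram: "1G"       # per neighbour, default 256M
-        number_of_cores: 2        # per neighbour, default 1
-        vlan_sender: 1000
-        vlan_receiver: 1001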
diff --git a/docs/userguide/opnfv_yardstick_tc024.rst b/docs/userguide/opnfv_yardstick_tc024.rst
deleted file mode 100644
index 8d15e8d2f..000000000
--- a/docs/userguide/opnfv_yardstick_tc024.rst
+++ /dev/null
@@ -1,76 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC024
-*************************************
-
-.. _man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html
-
-+-----------------------------------------------------------------------------+
-| CPU Load                                                                    |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC024_CPU Load                               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | CPU load                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the CPU load performance of the IaaS. This test  |
-|              | case should be run in parallel with other Yardstick test     |
-|              | cases and not run as a stand-alone test case.                |
-|              | Average, minimum and maximum values are obtained.            |
-|              | The purpose is also to be able to spot trends.               |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: cpuload.yaml (in the 'samples' directory)              |
-|              |                                                              |
-|              | * interval: 1 - repeat, pausing 1 second in-between.         |
-|              | * count: 10 - display statistics 10 times, then exit.        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | mpstat                                                       |
-|              |                                                              |
-|              | (mpstat is not always part of a Linux distribution, hence it |
-|              | needs to be installed. It is part of the Yardstick Glance    |
-|              | image. However, if mpstat is not present the TC instead uses |
-|              | /proc/stat as source to produce "mpstat" output.)            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | man-pages_                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              | * interval;                                                  |
-|              | * count;                                                     |
-|              | * runner iterations and intervals.                           |
-|              |                                                              |
-|              | There are default values for each above-mentioned option.    |
-|              | Run in background with other test cases.                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with mpstat included in it.                                  |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The host is installed. The related TC, or TCs, is            |
-|              | invoked and mpstat logs are produced and stored.             |
-|              |                                                              |
-|              | Result: Stored logs                                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | None. CPU load results are fetched and stored.               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
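-
-A minimal sketch of the sample task file described above (a sketch only;
-the scenario and context names are assumptions)::
-
-  scenarios:
-    - type: CPUload
-      options:
-        interval: 1             # seconds between reports
-        count: 10               # number of reports, then exit
-      host: zeus.demo           # hypothetical context name
-      runner:
-        type: Duration
-        duration: 60            # run alongside the primary test case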
-|              | 2) command_name: which is the command name used for requests |
-|              |                                                              |
-|              | There are four instances of the "openstack-cmd" monitor:     |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -api_name: "nova image-list"                                 |
-|              | monitor2:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -api_name: "neutron router-list"                             |
-|              | monitor3:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -api_name: "heat stack-list"                                 |
-|              | monitor4:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -api_name: "cinder list"                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there is one metric:                      |
-|              | 1)service_outage_time: which indicates the maximum outage    |
-|              | time (seconds) of the specified OpenStack command request.   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc025.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the host     |
-|              | being shut down to stopping the monitors                     |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2)POD file: pod.yaml                                         |
-|              | The POD configuration should be recorded in pod.yaml first.  |
-|              | The "host" item in this test case will use the node name in  |
-|              | the pod.yaml.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run as an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attacker: connect the host through SSH, and then execute  |
-|              | the shutdown script on the host                              |
-|              |                                                              |
-|              | Result: The host will be shut down.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: All monitor results will be aggregated.              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | It is the action taken when the test cases exit. It restarts |
-|              | the specified controller node if it is not already running.  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
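-
-A minimal sketch of the attacker/monitor configuration described above
-(only a fragment; the enclosing scenario type and SLA key are
-assumptions)::
-
-  attackers:
-    - fault_type: "host-shutdown"
-      host: node1                 # hypothetical pod.yaml node name
-  monitors:
-    - monitor_type: "openstack-cmd"
-      command_name: "nova image-list"
-    - monitor_type: "openstack-cmd"
-      command_name: "neutron router-list"
-    - monitor_type: "openstack-cmd"
-      command_name: "heat stack-list"
-    - monitor_type: "openstack-cmd"
-      command_name: "cinder list"
-  sla:
-    outage_time: 5                # seconds, assumed SLA key
-    action: monitor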
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC027
-*************************************
-
-.. _ipv6: https://wiki.opnfv.org/ipv6_opnfv_project
-
-+-----------------------------------------------------------------------------+
-|IPv6 connectivity between nodes on the tenant network                        |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC027_IPv6 connectivity                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | RTT, Round Trip Time                                         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To do a basic verification that IPv6 connectivity is within  |
-|              | acceptable boundaries when IPv6 packets travel between hosts |
-|              | located on the same or different compute blades.             |
-|              | The purpose is also to be able to spot trends. Test results, |
-|              | graphs and similar shall be stored for comparison reasons and|
-|              | product evolution understanding between different OPNFV      |
-|              | versions and/or configurations.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc027.yaml                             |
-|              |                                                              |
-|              | Packet size 56 bytes.                                        |
-|              | SLA RTT is set to maximum 30 ms.                             |
-|              | The IPv6 test case can be configured as three independent    |
-|              | modules (setup, run, teardown). To only set up the IPv6      |
-|              | testing environment and then run arbitrary tests, configure  |
-|              | "run_step" in the task yaml file as "setup". To set up the   |
-|              | environment and run the ping6 test automatically, configure  |
-|              | "run_step" as "setup, run". If an environment has already    |
-|              | been set up and only the IPv6 connectivity needs to be       |
-|              | verified, configure "run_step" as "run". By default, the     |
-|              | three modules run sequentially.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | ping6                                                        |
-|              |                                                              |
-|              | Ping6 is normally part of a Linux distribution, hence it     |
-|              | doesn't need to be installed.                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ipv6_                                                        |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | The test case can be configured with different run steps:    |
-|              | setup, run benchmark and teardown can be executed            |
-|              | independently.                                               |
-|              | SLA is optional. The SLA in this test case serves as an      |
-|              | example. Considerably lower RTT is expected.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with ping6 included in it.                                   |
-|              |                                                              |
-|              | For Brahmaputra, a compass_os_nosdn_ha deploy scenario is    |
-|              | needed. More installers and more SDN deploy scenarios will   |
-|              | be supported soon.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | To set up the IPv6 testing environment:                      |
-|              | 1. disable security group                                    |
-|              | 2. create (ipv6, ipv4) router, network and subnet            |
-|              | 3. create VRouter, VM1, VM2                                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | To run ping6 to verify IPv6 connectivity:                    |
-|              | 1. ssh to VM1                                                |
-|              | 2. Ping6 to the IPv6 router from VM1                         |
-|              | 3. Get the result (RTT); logs are stored                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | To tear down the IPv6 testing environment:                   |
-|              | 1. delete VRouter, VM1, VM2                                  |
-|              | 2. delete (ipv6, ipv4) router, network and subnet            |
-|              | 3. enable security group                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Test should not PASS if any RTT is above the optional SLA    |
-|              | value, or if there is a test case execution problem.         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
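-
-A minimal task-file sketch for this test case (a sketch only; the
-scenario name, run_step placement and context names are assumptions,
-while packet size and SLA follow the configuration above)::
-
-  scenarios:
-    - type: Ping6
-      options:
-        packetsize: 56          # bytes
-      host: host1               # hypothetical node name from pod.yaml
-      runner:
-        type: Iteration
-        iterations: 1
-        run_step: "setup,run,teardown"   # as described above
-      sla:
-        max_rtt: 30             # ms
-        action: monitor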
diff --git a/docs/userguide/opnfv_yardstick_tc028.rst b/docs/userguide/opnfv_yardstick_tc028.rst
deleted file mode 100644
index 24206f33f..000000000
--- a/docs/userguide/opnfv_yardstick_tc028.rst
+++ /dev/null
@@ -1,70 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co., Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC028
-*************************************
-
-.. _Cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest
-
-+-----------------------------------------------------------------------------+
-|KVM Latency measurements                                                     |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC028_KVM Latency measurements               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | min, avg and max latency                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS KVM virtualization capability with      |
-|              | regards to min, avg and max latency.                         |
-|              | The purpose is also to be able to spot trends. Test results, |
-|              | graphs and similar shall be stored for comparison reasons    |
-|              | and product evolution understanding between different OPNFV  |
-|              | versions and/or configurations.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: samples/cyclictest-node-context.yaml                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Cyclictest                                                   |
-|              |                                                              |
-|              | (Cyclictest is not always part of a Linux distribution,      |
-|              | hence it needs to be installed. As an example see the        |
-|              | /yardstick/tools/ directory for how to generate a Linux      |
-|              | image with cyclictest included.)                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | Cyclictest_                                                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | This test case is mainly for kvm4nfv project CI              |
-|              | verification: upgrade the host Linux kernel, boot a guest    |
-|              | VM, update its Linux kernel, and then run cyclictest to      |
-|              | verify that the new kernel works well.                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test kernel rpm, test sequence scripts and test guest    |
-|conditions    | image need to be put into the right folders as specified in  |
-|              | the test case yaml file.                                     |
-|              | The test guest image needs to have cyclictest included in    |
-|              | it.                                                          |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The host and guest OS kernels are upgraded. Cyclictest is    |
-|              | invoked and logs are produced and stored.                    |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
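-
-A minimal scenario sketch in the style of the sample file named above
-(only a sketch; the option names, SLA keys and context name are
-assumptions, not copied from cyclictest-node-context.yaml)::
-
-  scenarios:
-    - type: Cyclictest
-      options:
-        priority: 99            # real-time scheduling priority
-        threads: 1
-        loops: 1000
-        interval: 1000          # microseconds between wake-ups
-      host: kvm.LF              # hypothetical node name
-      runner:
-        type: Duration
-        duration: 1
-      sla:
-        max_min_latency: 50     # microseconds
-        max_avg_latency: 100
-        max_max_latency: 1000
-        action: monitor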
diff --git a/docs/userguide/opnfv_yardstick_tc037.rst b/docs/userguide/opnfv_yardstick_tc037.rst
deleted file mode 100644
index 5a6e1eaae..000000000
--- a/docs/userguide/opnfv_yardstick_tc037.rst
+++ /dev/null
@@ -1,167 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC037
-*************************************
-
-.. _cirros-image: https://download.cirros-cloud.net
-.. _Ping: https://linux.die.net/man/8/ping
-.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
-.. _mpstat: http://www.linuxcommand.org/man_pages/mpstat1.html
-
-+-----------------------------------------------------------------------------+
-|Latency, CPU Load, Throughput, Packet Loss                                   |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC037_LATENCY,CPU LOAD,THROUGHPUT,           |
-|              | PACKET LOSS                                                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Number of flows, latency, throughput, packet loss,           |
-|              | CPU utilization percentage, CPU interrupts per second        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | The purpose of TC037 is to evaluate the IaaS compute         |
-|              | capacity and network performance with regards to CPU         |
-|              | utilization, packet flows and network throughput, such as    |
-|              | if and how different amounts of flows matter for the         |
-|              | throughput between hosts on different compute blades, and    |
-|              | the CPU load variation.                                      |
-|              |                                                              |
-|              | Typically e.g. the performance of a vSwitch depends on the   |
-|              | number of flows running through it. Also the performance of  |
-|              | other equipment or entities can depend on the number of      |
-|              | flows or the packet sizes used.                              |
-|              |                                                              |
-|              | The purpose is also to be able to spot trends.               |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Ping, Pktgen, mpstat                                         |
-|              |                                                              |
-|              | Ping is a computer network administration software utility   |
-|              | used to test the reachability of a host on an Internet       |
-|              | Protocol (IP) network. It measures the round-trip time for   |
-|              | packets sent from the originating host to a destination      |
-|              | computer that are echoed back to the source.                 |
-|              |                                                              |
-|              | The Linux packet generator (pktgen) is a tool that generates |
-|              | packets at very high speed in the kernel. pktgen is mainly   |
-|              | used to drive network and LAN equipment tests, and it        |
-|              | supports multi-threading. It can generate UDP packets with   |
-|              | random MAC addresses, IP addresses and port numbers, using   |
-|              | multiple CPU processors and NICs on different PCI buses      |
-|              | (PCI, PCIe). pktgen performance depends on hardware          |
-|              | parameters such as CPU processing speed, memory latency and  |
-|              | PCI bus speed; the transmit rate can exceed 10 Gbit/s,       |
-|              | which satisfies most NIC test requirements.                  |
-|              |                                                              |
-|              | The mpstat command writes to standard output activities for  |
-|              | each available processor, processor 0 being the first one.   |
-|              | Global average activities among all processors are also      |
-|              | reported. The mpstat command can be used on both SMP and UP  |
-|              | machines, but in the latter, only global average activities  |
-|              | will be printed.                                             |
-|              |                                                              |
-|              | (Ping is normally part of any Linux distribution, hence it   |
-|              | doesn't need to be installed. It is also part of the         |
-|              | Yardstick Docker image.                                      |
-|              | For example also a Cirros image can be downloaded from       |
-|              | cirros-image_, it includes ping.                             |
-|              |                                                              |
-|              | Pktgen and mpstat are not always part of a Linux             |
-|              | distribution, hence they need to be installed. They are      |
-|              | part of the Yardstick Docker image.                          |
-|              | As an example see the /yardstick/tools/ directory for how    |
-|              | to generate a Linux image with pktgen and mpstat included.)  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test          | This test case uses Pktgen to generate packet flows between  |
-|description   | two hosts for simulating network workloads on the SUT.       |
-|              | Ping packets (ICMP protocol's mandatory ECHO_REQUEST         |
-|              | datagram) are sent from a host VM to the target VM(s) to     |
-|              | elicit ICMP ECHO_RESPONSE, meanwhile CPU activities are      |
-|              | monitored by mpstat.                                         |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc037.yaml                             |
-|              | (see the sketch after this table)                            |
-|              |                                                              |
-|              | Packet size is set to 64 bytes.                              |
-|              | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000.     |
-|              | The configured amounts of ports map to between 2 and         |
-|              | 1001000 flows, respectively. Each port amount is run two     |
-|              | times, for 20 seconds each. Then the next port_amount is     |
-|              | run, and so on.                                              |
-|              | During the test the CPU load on both client and server, and  |
-|              | the network latency between the client and server are        |
-|              | measured. The client and server are distributed on           |
-|              | different hardware.                                          |
-|              | The mpstat monitoring interval is set to 1 second.           |
-|              | The ping packet size is set to 100 bytes.                    |
-|              | For SLA, max_ppm is set to 1000.                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              | * pktgen packet sizes;                                       |
-|              | * amount of flows;                                           |
-|              | * test duration;                                             |
-|              | * ping packet size;                                          |
-|              | * mpstat monitor interval.                                   |
-|              |                                                              |
-|              | Default values exist.                                        |
-|              |                                                              |
-|              | SLA (optional): max_ppm: The number of packets per million   |
-|              | packets sent that it is acceptable to lose, i.e. not         |
-|              | receive.                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
| -| | | -+--------------+--------------------------------------------------------------+ -|references | Ping_ | -| | | -| | mpstat_ | -| | | -| | pktgen_ | -| | | -| | ETSI-NFV-TST001 | -| | | -+--------------+--------------------------------------------------------------+ -|pre-test | The test case image needs to be installed into Glance | -|conditions | with pktgen, mpstat included in it. | -| | | -| | No POD specific requirements have been identified. | -| | | -+--------------+--------------------------------------------------------------+ -|test sequence | description and expected result | -| | | -+--------------+--------------------------------------------------------------+ -|step 1 | Two host VMs are booted, as server and client. | -| | | -+--------------+--------------------------------------------------------------+ -|step 2 | Yardstick is connected with the server VM by using ssh. | -| | 'pktgen_benchmark', "ping_benchmark" bash script are copyied | -| | from Jump Host to the server VM via the ssh tunnel. | -| | | -+--------------+--------------------------------------------------------------+ -|step 3 | An IP table is setup on server to monitor for received | -| | packets. | -| | | -+--------------+--------------------------------------------------------------+ -|step 4 | pktgen is invoked to generate packet flow between two server | -| | and client for simulating network workloads on the SUT. Ping | -| | is invoked. Ping packets are sent from server VM to client | -| | VM. mpstat is invoked, recording activities for each | -| | available processor. Results are processed and checked | -| | against the SLA. Logs are produced and stored. | -| | | -| | Result: Logs are stored. | -| | | -+--------------+--------------------------------------------------------------+ -|step 5 | Two host VMs are deleted. | -| | | -+--------------+--------------------------------------------------------------+ -|test verdict | Fails only if SLA is not passed, or if there is a test case | -| | execution problem. | -| | | -+--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc038.rst b/docs/userguide/opnfv_yardstick_tc038.rst deleted file mode 100644 index 692c76819..000000000 --- a/docs/userguide/opnfv_yardstick_tc038.rst +++ /dev/null @@ -1,104 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -************************************* -Yardstick Test Case Description TC038 -************************************* - -.. _cirros: https://download.cirros-cloud.net -.. 
diff --git a/docs/userguide/opnfv_yardstick_tc038.rst b/docs/userguide/opnfv_yardstick_tc038.rst
deleted file mode 100644
index 692c76819..000000000
--- a/docs/userguide/opnfv_yardstick_tc038.rst
+++ /dev/null
@@ -1,104 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-*************************************
-Yardstick Test Case Description TC038
-*************************************
-
-.. _cirros: https://download.cirros-cloud.net
-.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
-
-+-----------------------------------------------------------------------------+
-|Latency, CPU Load, Throughput, Packet Loss (Extended measurements)           |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC038_Latency,CPU Load,Throughput,Packet Loss|
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Number of flows, latency, throughput, CPU load, packet loss  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS network performance with regards to     |
-|              | flows and throughput, such as if and how different amounts   |
-|              | of flows matter for the throughput between hosts on          |
-|              | different compute blades. Typically e.g. the performance of  |
-|              | a vSwitch depends on the number of flows running through     |
-|              | it. Also the performance of other equipment or entities can  |
-|              | depend on the number of flows or the packet sizes used.      |
-|              | The purpose is also to be able to spot trends. Test results, |
-|              | graphs and similar shall be stored for comparison reasons    |
-|              | and product evolution understanding between different OPNFV  |
-|              | versions and/or configurations.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc038.yaml                             |
-|              |                                                              |
-|              | Packet size: 64 bytes                                        |
-|              | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000.     |
-|              | The configured amounts of ports map to between 2 and         |
-|              | 1001000 flows, respectively. Each port amount is run ten     |
-|              | times, for 20 seconds each. Then the next port_amount is     |
-|              | run, and so on.                                              |
-|              | During the test the CPU load on both client and server, and  |
-|              | the network latency between the client and server are        |
-|              | measured. The client and server are distributed on           |
-|              | different hardware.                                          |
-|              | For SLA, max_ppm is set to 1000.                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | pktgen                                                       |
-|              |                                                              |
-|              | (Pktgen is not always part of a Linux distribution, hence    |
-|              | it needs to be installed. It is part of the Yardstick        |
-|              | Glance image.                                                |
-|              | As an example see the /yardstick/tools/ directory for how    |
-|              | to generate a Linux image with pktgen included.)             |
-|              |                                                              |
-|              | ping                                                         |
-|              |                                                              |
-|              | Ping is normally part of any Linux distribution, hence it    |
-|              | doesn't need to be installed. It is also part of the         |
-|              | Yardstick Glance image.                                      |
-|              | (For example also a cirros_ image can be downloaded, it      |
-|              | includes ping.)                                              |
-|              |                                                              |
-|              | mpstat                                                       |
-|              |                                                              |
-|              | (Mpstat is not always part of a Linux distribution, hence    |
-|              | it needs to be installed. It is part of the Yardstick        |
-|              | Glance image.)                                               |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | Ping and Mpstat man pages                                    |
-|              |                                                              |
-|              | pktgen_                                                      |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different packet sizes, amount   |
-|              | of flows and test duration. Default values exist.            |
-|              |                                                              |
-|              | SLA (optional): max_ppm: The number of packets per million   |
-|              | packets sent that it is acceptable to lose, i.e. not         |
-|              | receive.                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
| -| | | -+--------------+--------------------------------------------------------------+ -|pre-test | The test case image needs to be installed into Glance | -|conditions | with pktgen included in it. | -| | | -| | No POD specific requirements have been identified. | -| | | -+--------------+--------------------------------------------------------------+ -|test sequence | description and expected result | -| | | -+--------------+--------------------------------------------------------------+ -|step 1 | The hosts are installed, as server and client. pktgen is | -| | invoked and logs are produced and stored. | -| | | -| | Result: Logs are stored. | -| | | -+--------------+--------------------------------------------------------------+ -|test verdict | Fails only if SLA is not passed, or if there is a test case | -| | execution problem. | -| | | -+--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc040.rst b/docs/userguide/opnfv_yardstick_tc040.rst deleted file mode 100644 index d62fbf787..000000000 --- a/docs/userguide/opnfv_yardstick_tc040.rst +++ /dev/null @@ -1,65 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. - -************************************* -Yardstick Test Case Description TC040 -************************************* - -.. _Parser: https://wiki.opnfv.org/parser - -+-----------------------------------------------------------------------------+ -|Verify Parser Yang-to-Tosca | -| | -+--------------+--------------------------------------------------------------+ -|test case id | OPNFV_YARDSTICK_TC040 Verify Parser Yang-to-Tosca | -| | | -+--------------+--------------------------------------------------------------+ -|metric | 1. tosca file which is converted from yang file by Parser | -| | 2. result whether the output is same with expected outcome | -+--------------+--------------------------------------------------------------+ -|test purpose | To verify the function of Yang-to-Tosca in Parser. | -| | | -+--------------+--------------------------------------------------------------+ -|configuration | file: opnfv_yardstick_tc040.yaml | -| | | -| | yangfile: the path of the yangfile which you want to convert | -| | toscafile: the path of the toscafile which is your expected | -| | outcome. | -| | | -+--------------+--------------------------------------------------------------+ -|test tool | Parser | -| | | -| | (Parser is not part of a Linux distribution, hence it | -| | needs to be installed. As an example see the | -| | /yardstick/benchmark/scenarios/parser/parser_setup.sh for | -| | how to install it manual. Of course, it will be installed | -| | and uninstalled automatically when you run this test case | -| | by yardstick) | -+--------------+--------------------------------------------------------------+ -|references | Parser_ | -| | | -| | | -+--------------+--------------------------------------------------------------+ -|applicability | Test can be configured with different path of yangfile and | -| | toscafile to fit your real environment to verify Parser | -| | | -+--------------+--------------------------------------------------------------+ -|pre-test | No POD specific requirements have been identified. 
-|pre-test      | No POD specific requirements have been identified.           |
-|conditions    | The test case can be run without a VM.                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | Parser is installed without a VM; the Yang-to-Tosca module   |
-|              | is run to convert the yang file to a tosca file, and the     |
-|              | output is validated against the expected outcome.            |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if the output differs from the expected outcome,  |
-|              | or if there is a test case execution problem.                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
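A hedged sketch of the scenario described above follows; the yangfile and
toscafile paths are placeholders, and the option names should be checked
against opnfv_yardstick_tc040.yaml::

    # Illustrative sketch; paths are placeholders.
    scenarios:
    -
      type: Parser
      options:
        yangfile: /tmp/yardstick/samples/yang.yaml      # file to convert
        toscafile: /tmp/yardstick/samples/tosca.yaml    # expected outcome
      runner:
        type: Iteration
        iterations: 1
        interval: 1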
| -| | | -+--------------+--------------------------------------------------------------+ -|test sequence | description and expected result | -| | | -+--------------+--------------------------------------------------------------+ -|step 1 | The hosts are installed on different blades, as server and | -| | client. Both server and client have three interfaces. The | -| | first one is management such as ssh. The other two are used | -| | by DPDK. | -| | | -+--------------+--------------------------------------------------------------+ -|step 2 | Testpmd_ is invoked with configurations to forward packets | -| | from one DPDK port to the other on server. | -| | | -+--------------+--------------------------------------------------------------+ -|step 3 | Pktgen-dpdk is invoked with configurations as a traffic | -| | generator and logs are produced and stored on client. | -| | | -| | Result: Logs are stored. | -| | | -+--------------+--------------------------------------------------------------+ -|test verdict | Fails only if SLA is not passed, or if there is a test case | -| | execution problem. | -| | | -+--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc043.rst b/docs/userguide/opnfv_yardstick_tc043.rst deleted file mode 100644 index a873696dc..000000000 --- a/docs/userguide/opnfv_yardstick_tc043.rst +++ /dev/null @@ -1,102 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. - -************************************* -Yardstick Test Case Description TC043 -************************************* - -.. _cirros-image: https://download.cirros-cloud.net -.. _Ping: https://linux.die.net/man/8/ping - -+-----------------------------------------------------------------------------+ -|Network Latency Between NFVI Nodes | -| | -+--------------+--------------------------------------------------------------+ -|test case id | OPNFV_YARDSTICK_TC043_LATENCY_BETWEEN_NFVI_NODES | -| | | -+--------------+--------------------------------------------------------------+ -|metric | RTT (Round Trip Time) | -| | | -+--------------+--------------------------------------------------------------+ -|test purpose | The purpose of TC043 is to do a basic verification that | -| | network latency is within acceptable boundaries when packets | -| | travel between different NFVI nodes. | -| | | -| | The purpose is also to be able to spot the trends. | -| | Test results, graphs and similar shall be stored for | -| | comparison reasons and product evolution understanding | -| | between different OPNFV versions and/or configurations. | -| | | -+--------------+--------------------------------------------------------------+ -|test tool | ping | -| | | -| | Ping is a computer network administration software utility | -| | used to test the reachability of a host on an Internet | -| | Protocol (IP) network. It measures the round-trip time for | -| | packet sent from the originating host to a destination | -| | computer that are echoed back to the source. | -| | | -+--------------+--------------------------------------------------------------+ -|test topology | Ping packets (ICMP protocol's mandatory ECHO_REQUEST | -| | datagram) are sent from host node to target node to elicit | -| | ICMP ECHO_RESPONSE. 
| -| | | -+--------------+--------------------------------------------------------------+ -|configuration | file: opnfv_yardstick_tc043.yaml | -| | | -| | Packet size 100 bytes. Total test duration 600 seconds. | -| | One ping each 10 seconds. SLA RTT is set to maximum 10 ms. | -| | | -+--------------+--------------------------------------------------------------+ -|applicability | This test case can be configured with different: | -| | | -| | * packet sizes; | -| | * burst sizes; | -| | * ping intervals; | -| | * test durations; | -| | * test iterations. | -| | | -| | Default values exist. | -| | | -| | SLA is optional. The SLA in this test case serves as an | -| | example. Considerably lower RTT is expected, and also normal | -| | to achieve in balanced L2 environments. However, to cover | -| | most configurations, both bare metal and fully virtualized | -| | ones, this value should be possible to achieve and | -| | acceptable for black box testing. Many real time | -| | applications start to suffer badly if the RTT time is higher | -| | than this. Some may suffer bad also close to this RTT, while | -| | others may not suffer at all. It is a compromise that may | -| | have to be tuned for different configuration purposes. | -| | | -+--------------+--------------------------------------------------------------+ -|references | Ping_ | -| | | -| | ETSI-NFV-TST001 | -| | | -+--------------+--------------------------------------------------------------+ -|pre_test | Each pod node must have ping included in it. | -|conditions | | -| | | -+--------------+--------------------------------------------------------------+ -|test sequence | description and expected result | -| | | -+--------------+--------------------------------------------------------------+ -|step 1 | Yardstick is connected with the NFVI node by using ssh. | -| | 'ping_benchmark' bash script is copyied from Jump Host to | -| | the NFVI node via the ssh tunnel. | -| | | -+--------------+--------------------------------------------------------------+ -|step 2 | Ping is invoked. Ping packets are sent from server node to | -| | client node. RTT results are calculated and checked against | -| | the SLA. Logs are produced and stored. | -| | | -| | Result: Logs are stored. | -| | | -+--------------+--------------------------------------------------------------+ -|test verdict | Test should not PASS if any RTT is above the optional SLA | -| | value, or if there is a test case execution problem. | -| | | -+--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc044.rst b/docs/userguide/opnfv_yardstick_tc044.rst deleted file mode 100644 index 2be8517a1..000000000 --- a/docs/userguide/opnfv_yardstick_tc044.rst +++ /dev/null @@ -1,82 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. - -************************************* -Yardstick Test Case Description TC044 -************************************* - -.. 
diff --git a/docs/userguide/opnfv_yardstick_tc044.rst b/docs/userguide/opnfv_yardstick_tc044.rst
deleted file mode 100644
index 2be8517a1..000000000
--- a/docs/userguide/opnfv_yardstick_tc044.rst
+++ /dev/null
@@ -1,82 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co., Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC044
-*************************************
-
-.. _man-pages: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
-
-+-----------------------------------------------------------------------------+
-|Memory Utilization                                                           |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC044_Memory Utilization                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Memory utilization                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS compute capability with regards to      |
-|              | memory utilization. This test case should be run in          |
-|              | parallel to other Yardstick test cases and not run as a      |
-|              | stand-alone test case.                                       |
-|              | It measures the memory usage statistics, including used      |
-|              | memory, free memory, buffer, cache and shared memory.        |
-|              | Both average and maximum values are obtained.                |
-|              | The purpose is also to be able to spot trends.               |
-|              | Test results, graphs and similar shall be stored for         |
-|              | comparison reasons and product evolution understanding       |
-|              | between different OPNFV versions and/or configurations.      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | File: memload.yaml (in the 'samples' directory)              |
-|              | (see the sketch after this table)                            |
-|              |                                                              |
-|              | * interval: 1 - repeat, pausing every 1 second in-between.   |
-|              | * count: 10 - display statistics 10 times, then exit.        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | free                                                         |
-|              |                                                              |
-|              | free provides information about unused and used memory and   |
-|              | swap space on any computer running Linux or another Unix-    |
-|              | like operating system.                                       |
-|              | free is normally part of a Linux distribution, hence it      |
-|              | doesn't need to be installed.                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | man-pages_                                                   |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                       |
-|              |                                                              |
-|              | * interval;                                                  |
-|              | * count;                                                     |
-|              | * runner iterations and intervals.                           |
-|              |                                                              |
-|              | There are default values for each of the above options.      |
-|              | This test case is run in the background together with other  |
-|              | test cases.                                                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance        |
-|conditions    | with free included in the image.                             |
-|              |                                                              |
-|              | No POD specific requirements have been identified.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The host is installed as client. The related TC, or TCs, is  |
-|              | invoked and free logs are produced and stored.               |
-|              |                                                              |
-|              | Result: logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | None. Memory utilization results are fetched and stored.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
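A hedged sketch of the memload.yaml scenario described above; the scenario
type name and options are indicative and should be checked against the
shipped samples/memload.yaml::

    # Illustrative sketch; type/option names are indicative.
    scenarios:
    -
      type: MEMORYload
      options:
        interval: 1    # seconds between samples
        count: 10      # number of samples, then exit
      host: zeus.demo
      runner:
        type: Iteration
        iterations: 10
        interval: 1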
diff --git a/docs/userguide/opnfv_yardstick_tc045.rst b/docs/userguide/opnfv_yardstick_tc045.rst
deleted file mode 100644
index 0b0993c34..000000000
--- a/docs/userguide/opnfv_yardstick_tc045.rst
+++ /dev/null
@@ -1,139 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC045
-*************************************
-
-+-----------------------------------------------------------------------------+
-|Control Node Openstack Service High Availability - Neutron Server            |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC045: Control node Openstack service down - |
-|              | neutron server                                               |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the      |
-|              | network service provided by OpenStack (neutron-server) on    |
-|              | the control node.                                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case kills the processes of the neutron-server     |
-|              | service on a selected control node, then checks whether the  |
-|              | request of the related OpenStack command is OK and the       |
-|              | killed processes are recovered.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "kill-process" is      |
-|              | needed. This attacker includes three parameters:             |
-|              | 1) fault_type: which is used for finding the attacker's      |
-|              | scripts. It should always be set to "kill-process" in this   |
-|              | test case.                                                   |
-|              | 2) process_name: which is the process name of the specified  |
-|              | OpenStack service. If multiple processes use the same name   |
-|              | on the host, all of them are killed by this attacker.        |
-|              | In this case, this parameter should always be set to         |
-|              | "neutron-server".                                            |
-|              | 3) host: which is the name of a control node being attacked. |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -fault_type: "kill-process"                                  |
-|              | -process_name: "neutron-server"                              |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:          |
-|              | 1. the "openstack-cmd" monitor constantly requests a         |
-|              | specific OpenStack command, and needs two parameters:        |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the      |
-|              | request. In this case, the command name should be neutron    |
-|              | related commands.                                            |
-|              |                                                              |
-|              | 2. the "process" monitor checks whether a process is         |
-|              | running on a specific node, and needs three parameters:      |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "process" for this monitor.                                  |
-|              | 2) process_name: which is the process name to be monitored.  |
-|              | 3) host: which is the name of the node running the process.  |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "neutron agent-list"                          |
-|              | monitor2:                                                    |
-|              | -monitor_type: "process"                                     |
-|              | -process_name: "neutron-server"                              |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there are two metrics:                    |
-|              | 1) service_outage_time: which indicates the maximum outage   |
-|              | time (seconds) of the specified OpenStack command request.   |
-|              | 2) process_recover_time: which indicates the maximum time    |
-|              | (seconds) from the process being killed until it is          |
-|              | recovered.                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc045.yaml                |
-|              | (see the sketch after this table)                            |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the process  |
-|              | being killed until the monitors are stopped                  |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first.  |
-|              | The "host" item in this test case uses the node name in      |
-|              | pod.yaml.                                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run as an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attack: connect to the host through SSH, and then         |
-|              | execute the kill-process script with the parameter value     |
-|              | specified by "process_name"                                  |
-|              |                                                              |
-|              | Result: The process will be killed.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It will   |
-|              | check the status of the specified process on the host, and   |
-|              | restart the process if it is not running, for the next test  |
-|              | cases.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
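Putting the attacker, monitor and SLA pieces above together, the following is
a hedged sketch of how opnfv_yardstick_tc045.yaml might wire them into a
ServiceHA scenario. Field names follow the descriptions above but are
indicative only; the node names and pod file path are placeholders::

    # Illustrative sketch; field names follow the table above, indicative only.
    scenarios:
    -
      type: ServiceHA
      options:
        attackers:
        - fault_type: "kill-process"
          process_name: "neutron-server"
          host: node1
        wait_time: 10          # the "waiting_time" described above
        monitors:
        - monitor_type: "openstack-cmd"
          command_name: "neutron agent-list"
        - monitor_type: "process"
          process_name: "neutron-server"
          host: node1
      nodes:
        node1: node1.LF
      runner:
        type: Iteration
        iterations: 1
      sla:
        outage_time: 5         # bound on service_outage_time, in seconds
        action: monitor
    context:
      type: Node
      name: LF
      file: /path/to/pod.yaml

The same pattern applies to TC046-TC049 below, with the process name and the
monitored OpenStack command swapped for the service under test.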
diff --git a/docs/userguide/opnfv_yardstick_tc046.rst b/docs/userguide/opnfv_yardstick_tc046.rst
deleted file mode 100644
index cce6c6884..000000000
--- a/docs/userguide/opnfv_yardstick_tc046.rst
+++ /dev/null
@@ -1,138 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC046
-*************************************
-
-+-----------------------------------------------------------------------------+
-|Control Node Openstack Service High Availability - Keystone                  |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC046: Control node Openstack service down - |
-|              | keystone                                                     |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the      |
-|              | user service provided by OpenStack (keystone) on the         |
-|              | control node.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case kills the processes of the keystone service   |
-|              | on a selected control node, then checks whether the request  |
-|              | of the related OpenStack command is OK and the killed        |
-|              | processes are recovered.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "kill-process" is      |
-|              | needed. This attacker includes three parameters:             |
-|              | 1) fault_type: which is used for finding the attacker's      |
-|              | scripts. It should always be set to "kill-process" in this   |
-|              | test case.                                                   |
-|              | 2) process_name: which is the process name of the specified  |
-|              | OpenStack service. If multiple processes use the same name   |
-|              | on the host, all of them are killed by this attacker.        |
-|              | In this case, this parameter should always be set to         |
-|              | "keystone".                                                  |
-|              | 3) host: which is the name of a control node being attacked. |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -fault_type: "kill-process"                                  |
-|              | -process_name: "keystone"                                    |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:          |
-|              | 1. the "openstack-cmd" monitor constantly requests a         |
-|              | specific OpenStack command, and needs two parameters:        |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the      |
-|              | request. In this case, the command name should be keystone   |
-|              | related commands.                                            |
-|              |                                                              |
-|              | 2. the "process" monitor checks whether a process is         |
-|              | running on a specific node, and needs three parameters:      |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "process" for this monitor.                                  |
-|              | 2) process_name: which is the process name to be monitored.  |
-|              | 3) host: which is the name of the node running the process.  |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "keystone user-list"                          |
-|              | monitor2:                                                    |
-|              | -monitor_type: "process"                                     |
-|              | -process_name: "keystone"                                    |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there are two metrics:                    |
-|              | 1) service_outage_time: which indicates the maximum outage   |
-|              | time (seconds) of the specified OpenStack command request.   |
-|              | 2) process_recover_time: which indicates the maximum time    |
-|              | (seconds) from the process being killed until it is          |
-|              | recovered.                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc046.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the process  |
-|              | being killed until the monitors are stopped                  |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first.  |
-|              | The "host" item in this test case uses the node name in      |
-|              | pod.yaml.                                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run as an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attack: connect to the host through SSH, and then         |
-|              | execute the kill-process script with the parameter value     |
-|              | specified by "process_name"                                  |
-|              |                                                              |
-|              | Result: The process will be killed.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It will   |
-|              | check the status of the specified process on the host, and   |
-|              | restart the process if it is not running, for the next test  |
-|              | cases.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc047.rst b/docs/userguide/opnfv_yardstick_tc047.rst
deleted file mode 100644
index 95158cfd6..000000000
--- a/docs/userguide/opnfv_yardstick_tc047.rst
+++ /dev/null
@@ -1,139 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC047
-*************************************
-
-+-----------------------------------------------------------------------------+
-|Control Node Openstack Service High Availability - Glance Api                |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC047: Control node Openstack service down - |
-|              | glance api                                                   |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the      |
-|              | image service provided by OpenStack (glance-api) on the      |
-|              | control node.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case kills the processes of the glance-api service |
-|              | on a selected control node, then checks whether the request  |
-|              | of the related OpenStack command is OK and the killed        |
-|              | processes are recovered.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "kill-process" is      |
-|              | needed. This attacker includes three parameters:             |
-|              | 1) fault_type: which is used for finding the attacker's      |
-|              | scripts. It should always be set to "kill-process" in this   |
-|              | test case.                                                   |
-|              | 2) process_name: which is the process name of the specified  |
-|              | OpenStack service. If multiple processes use the same name   |
-|              | on the host, all of them are killed by this attacker.        |
-|              | In this case, this parameter should always be set to         |
-|              | "glance-api".                                                |
-|              | 3) host: which is the name of a control node being attacked. |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -fault_type: "kill-process"                                  |
-|              | -process_name: "glance-api"                                  |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:          |
-|              | 1. the "openstack-cmd" monitor constantly requests a         |
-|              | specific OpenStack command, and needs two parameters:        |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the      |
-|              | request. In this case, the command name should be glance     |
-|              | related commands.                                            |
-|              |                                                              |
-|              | 2. the "process" monitor checks whether a process is         |
-|              | running on a specific node, and needs three parameters:      |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "process" for this monitor.                                  |
-|              | 2) process_name: which is the process name to be monitored.  |
-|              | 3) host: which is the name of the node running the process.  |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "glance image-list"                           |
-|              | monitor2:                                                    |
-|              | -monitor_type: "process"                                     |
-|              | -process_name: "glance-api"                                  |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there are two metrics:                    |
-|              | 1) service_outage_time: which indicates the maximum outage   |
-|              | time (seconds) of the specified OpenStack command request.   |
-|              | 2) process_recover_time: which indicates the maximum time    |
-|              | (seconds) from the process being killed until it is          |
-|              | recovered.                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc047.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the process  |
-|              | being killed until the monitors are stopped                  |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first.  |
-|              | The "host" item in this test case uses the node name in      |
-|              | pod.yaml.                                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run as an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attack: connect to the host through SSH, and then         |
-|              | execute the kill-process script with the parameter value     |
-|              | specified by "process_name"                                  |
-|              |                                                              |
-|              | Result: The process will be killed.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It will   |
-|              | check the status of the specified process on the host, and   |
-|              | restart the process if it is not running, for the next test  |
-|              | cases.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc048.rst b/docs/userguide/opnfv_yardstick_tc048.rst
deleted file mode 100644
index 21c00d1fe..000000000
--- a/docs/userguide/opnfv_yardstick_tc048.rst
+++ /dev/null
@@ -1,139 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC048
-*************************************
-
-+-----------------------------------------------------------------------------+
-|Control Node Openstack Service High Availability - Cinder Api                |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC048: Control node Openstack service down - |
-|              | cinder api                                                   |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the      |
-|              | volume service provided by OpenStack (cinder-api) on the     |
-|              | control node.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case kills the processes of the cinder-api service |
-|              | on a selected control node, then checks whether the request  |
-|              | of the related OpenStack command is OK and the killed        |
-|              | processes are recovered.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "kill-process" is      |
-|              | needed. This attacker includes three parameters:             |
-|              | 1) fault_type: which is used for finding the attacker's      |
-|              | scripts. It should always be set to "kill-process" in this   |
-|              | test case.                                                   |
-|              | 2) process_name: which is the process name of the specified  |
-|              | OpenStack service. If multiple processes use the same name   |
-|              | on the host, all of them are killed by this attacker.        |
-|              | In this case, this parameter should always be set to         |
-|              | "cinder-api".                                                |
-|              | 3) host: which is the name of a control node being attacked. |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -fault_type: "kill-process"                                  |
-|              | -process_name: "cinder-api"                                  |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:          |
-|              | 1. the "openstack-cmd" monitor constantly requests a         |
-|              | specific OpenStack command, and needs two parameters:        |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the      |
-|              | request. In this case, the command name should be cinder     |
-|              | related commands.                                            |
-|              |                                                              |
-|              | 2. the "process" monitor checks whether a process is         |
-|              | running on a specific node, and needs three parameters:      |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "process" for this monitor.                                  |
-|              | 2) process_name: which is the process name to be monitored.  |
-|              | 3) host: which is the name of the node running the process.  |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "cinder list"                                 |
-|              | monitor2:                                                    |
-|              | -monitor_type: "process"                                     |
-|              | -process_name: "cinder-api"                                  |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there are two metrics:                    |
-|              | 1) service_outage_time: which indicates the maximum outage   |
-|              | time (seconds) of the specified OpenStack command request.   |
-|              | 2) process_recover_time: which indicates the maximum time    |
-|              | (seconds) from the process being killed until it is          |
-|              | recovered.                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc048.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the process  |
-|              | being killed until the monitors are stopped                  |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first.  |
-|              | The "host" item in this test case uses the node name in      |
-|              | pod.yaml.                                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run as an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attack: connect to the host through SSH, and then         |
-|              | execute the kill-process script with the parameter value     |
-|              | specified by "process_name"                                  |
-|              |                                                              |
-|              | Result: The process will be killed.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It will   |
-|              | check the status of the specified process on the host, and   |
-|              | restart the process if it is not running, for the next test  |
-|              | cases.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
-|              | execution problem.                                           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc049.rst b/docs/userguide/opnfv_yardstick_tc049.rst
deleted file mode 100644
index f58bb9989..000000000
--- a/docs/userguide/opnfv_yardstick_tc049.rst
+++ /dev/null
@@ -1,139 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC049
-*************************************
-
-+-----------------------------------------------------------------------------+
-|Control Node Openstack Service High Availability - Swift Proxy               |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC049: Control node Openstack service down - |
-|              | swift proxy                                                  |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the      |
-|              | storage service provided by OpenStack (swift-proxy) on the   |
-|              | control node.                                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case kills the processes of the swift-proxy        |
-|              | service on a selected control node, then checks whether the  |
-|              | request of the related OpenStack command is OK and the       |
-|              | killed processes are recovered.                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "kill-process" is      |
-|              | needed. This attacker includes three parameters:             |
-|              | 1) fault_type: which is used for finding the attacker's      |
-|              | scripts. It should always be set to "kill-process" in this   |
-|              | test case.                                                   |
-|              | 2) process_name: which is the process name of the specified  |
-|              | OpenStack service. If multiple processes use the same name   |
-|              | on the host, all of them are killed by this attacker.        |
-|              | In this case, this parameter should always be set to         |
-|              | "swift-proxy".                                               |
-|              | 3) host: which is the name of a control node being attacked. |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -fault_type: "kill-process"                                  |
-|              | -process_name: "swift-proxy"                                 |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:          |
-|              | 1. the "openstack-cmd" monitor constantly requests a         |
-|              | specific OpenStack command, and needs two parameters:        |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the      |
-|              | request. In this case, the command name should be swift      |
-|              | related commands.                                            |
-|              |                                                              |
-|              | 2. the "process" monitor checks whether a process is         |
-|              | running on a specific node, and needs three parameters:      |
-|              | 1) monitor_type: which is used for finding the monitor       |
-|              | class and related scripts. It should always be set to        |
-|              | "process" for this monitor.                                  |
-|              | 2) process_name: which is the process name to be monitored.  |
-|              | 3) host: which is the name of the node running the process.  |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "swift stat"                                  |
-|              | monitor2:                                                    |
-|              | -monitor_type: "process"                                     |
-|              | -process_name: "swift-proxy"                                 |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|              | 2) process_recover_time: which indicates the maximum time   |
-|              | (seconds) from the process being killed until it is         |
-|              | recovered                                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc049.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the process |
-|              | being killed to stopping the monitors                        |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first. |
-|              | The "host" item in this test case uses the node name given  |
-|              | in the pod.yaml.                                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run in an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attacker: connect the host through SSH, and then execute |
-|              | the kill-process script with the param value specified by   |
-|              | "process_name"                                               |
-|              |                                                              |
-|              | Result: The process will be killed.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It will  |
-|              | check the status of the specified process on the host, and  |
-|              | restart the process if it is not running, for the next test |
-|              | cases.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if the SLA is not passed, or if there is a test  |
-|              | case execution problem.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
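The attacker, monitors and SLA rows above come together in the scenario
section of the test case file. The following is a minimal sketch, assuming
the schema simply mirrors the parameter names listed in this table; it is
not a verbatim copy of opnfv_yardstick_tc049.yaml, and exact key names,
nesting and SLA values may differ between Yardstick releases::

    scenarios:
    -
      type: ServiceHA
      options:
        attackers:
        - fault_type: "kill-process"
          process_name: "swift-proxy"
          host: node1
        monitors:
        - monitor_type: "openstack-cmd"
          command_name: "swift stat"
        - monitor_type: "process"
          process_name: "swift-proxy"
          host: node1
        # time between the kill and stopping the monitors (seconds)
        waiting_time: 10
      sla:
        outage_time: 5     # assumed limit for the metrics above (seconds)
        action: monitor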
diff --git a/docs/userguide/opnfv_yardstick_tc050.rst b/docs/userguide/opnfv_yardstick_tc050.rst
deleted file mode 100644
index 8890c9d53..000000000
--- a/docs/userguide/opnfv_yardstick_tc050.rst
+++ /dev/null
@@ -1,135 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC050
-*************************************
-
-+-----------------------------------------------------------------------------+
-|OpenStack Controller Node Network High Availability                          |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC050: OpenStack Controller Node Network    |
-|              | High Availability                                            |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the     |
-|              | control node. When one of the controllers fails to connect  |
-|              | to the network, the OpenStack services on this node break   |
-|              | down. These OpenStack services should still be accessible   |
-|              | via the other controller nodes, and the services on the     |
-|              | failed controller node should be isolated.                   |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case turns off the network interfaces of a        |
-|              | specified control node, then checks whether all services    |
-|              | provided by the control node are OK with some monitor tools.|
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "close-interface" is  |
-|              | needed. This attacker includes three parameters:            |
-|              | 1) fault_type: which is used for finding the attacker's     |
-|              | scripts. It should always be set to "close-interface" in    |
-|              | this test case.                                              |
-|              | 2) host: which is the name of a control node being attacked.|
-|              | 3) interface: the network interface to be turned off.       |
-|              |                                                              |
-|              | There are four instances of the "close-interface" attacker: |
-|              | attacker1 (for the public network):                          |
-|              | -fault_type: "close-interface"                               |
-|              | -host: node1                                                 |
-|              | -interface: "br-ex"                                          |
-|              | attacker2 (for the management network):                      |
-|              | -fault_type: "close-interface"                               |
-|              | -host: node1                                                 |
-|              | -interface: "br-mgmt"                                        |
-|              | attacker3 (for the storage network):                         |
-|              | -fault_type: "close-interface"                               |
-|              | -host: node1                                                 |
-|              | -interface: "br-storage"                                     |
-|              | attacker4 (for the private network):                         |
-|              | -fault_type: "close-interface"                               |
-|              | -host: node1                                                 |
-|              | -interface: "br-mesh"                                        |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, the monitor named "openstack-cmd" is     |
-|              | needed. The monitor needs two parameters:                    |
-|              | 1) monitor_type: which is used for finding the monitor      |
-|              | class and related scripts. It should always be set to       |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the     |
-|              | request                                                      |
-|              |                                                              |
-|              | There are four instances of the "openstack-cmd" monitor:    |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "nova image-list"                             |
-|              | monitor2:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "neutron router-list"                         |
-|              | monitor3:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "heat stack-list"                             |
-|              | monitor4:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "cinder list"                                 |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there is one metric:                     |
-|              | 1) service_outage_time: which indicates the maximum outage  |
-|              | time (seconds) of the specified OpenStack command request.  |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc050.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the         |
-|              | interfaces being turned off to stopping the monitors        |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first. |
-|              | The "host" item in this test case uses the node name given  |
-|              | in the pod.yaml.                                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run in an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attacker: connect the host through SSH, and then execute |
-|              | the script that turns off the network interface, with the   |
-|              | param value specified by "interface".                        |
-|              |                                                              |
-|              | Result: The network interfaces will be turned down.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It       |
-|              | brings the network interfaces of the control node back up   |
-|              | if they are still down.                                      |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if the SLA is not passed, or if there is a test  |
-|              | case execution problem.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
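Because TC050 runs several attacker and monitor instances at once, they are
naturally expressed as YAML lists. A minimal sketch follows, again assuming
keys that mirror the parameter names above; two of the four instances of
each kind are shown, and the others follow the same pattern::

    attackers:
    - fault_type: "close-interface"
      host: node1
      interface: "br-ex"      # public network
    - fault_type: "close-interface"
      host: node1
      interface: "br-mgmt"    # management network
    monitors:
    - monitor_type: "openstack-cmd"
      command_name: "nova image-list"
    - monitor_type: "openstack-cmd"
      command_name: "neutron router-list"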
diff --git a/docs/userguide/opnfv_yardstick_tc051.rst b/docs/userguide/opnfv_yardstick_tc051.rst
deleted file mode 100644
index 3402ccd92..000000000
--- a/docs/userguide/opnfv_yardstick_tc051.rst
+++ /dev/null
@@ -1,117 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC051
-*************************************
-
-+-----------------------------------------------------------------------------+
-|OpenStack Controller Node CPU Overload High Availability                     |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC051: OpenStack Controller Node CPU        |
-|              | Overload High Availability                                   |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the     |
-|              | control node. When the CPU usage of a specified controller  |
-|              | node is stressed to 100%, the OpenStack services on this    |
-|              | node break down. These OpenStack services should still be   |
-|              | accessible via the other controller nodes, and the services |
-|              | on the failed controller node should be isolated.           |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case stresses the CPU usage of a specified        |
-|              | control node to 100%, then checks whether all services      |
-|              | provided by the environment are OK with some monitor tools. |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "stress-cpu" is       |
-|              | needed. This attacker includes two parameters:              |
-|              | 1) fault_type: which is used for finding the attacker's     |
-|              | scripts. It should always be set to "stress-cpu" in         |
-|              | this test case.                                              |
-|              | 2) host: which is the name of a control node being attacked.|
-|              | e.g.                                                         |
-|              | -fault_type: "stress-cpu"                                    |
-|              | -host: node1                                                 |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, the monitor named "openstack-cmd" is     |
-|              | needed. The monitor needs two parameters:                    |
-|              | 1) monitor_type: which is used for finding the monitor      |
-|              | class and related scripts. It should always be set to       |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the     |
-|              | request                                                      |
-|              |                                                              |
-|              | There are four instances of the "openstack-cmd" monitor:    |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "nova image-list"                             |
-|              | monitor2:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "neutron router-list"                         |
-|              | monitor3:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "heat stack-list"                             |
-|              | monitor4:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "cinder list"                                 |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there is one metric:                     |
-|              | 1) service_outage_time: which indicates the maximum outage  |
-|              | time (seconds) of the specified OpenStack command request.  |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc051.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the CPU     |
-|              | stress starting to stopping the monitors                    |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first. |
-|              | The "host" item in this test case uses the node name given  |
-|              | in the pod.yaml.                                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run in an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attacker: connect the host through SSH, and then execute |
-|              | the stress-cpu script on the host.                           |
-|              |                                                              |
-|              | Result: The CPU usage of the host will be stressed to 100%. |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It kills |
-|              | the process that stresses the CPU usage.                     |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if the SLA is not passed, or if there is a test  |
-|              | case execution problem.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc052.rst b/docs/userguide/opnfv_yardstick_tc052.rst
deleted file mode 100644
index 9514b6819..000000000
--- a/docs/userguide/opnfv_yardstick_tc052.rst
+++ /dev/null
@@ -1,141 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC052
-*************************************
-
-+-----------------------------------------------------------------------------+
-|OpenStack Controller Node Disk I/O Block High Availability                   |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC052: OpenStack Controller Node Disk I/O   |
-|              | Block High Availability                                      |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the     |
-|              | control node. When the disk I/O of a specified disk is      |
-|              | blocked, the OpenStack services on this node break down.    |
-|              | Read and write services should still be accessible via the  |
-|              | other controller nodes, and the services on the failed      |
-|              | controller node should be isolated.                          |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case blocks the disk I/O of a specified control   |
-|              | node, then checks whether the services that need to read or |
-|              | write the disk of the control node are OK with some monitor |
-|              | tools.                                                       |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "disk-block" is       |
-|              | needed. This attacker includes two parameters:              |
-|              | 1) fault_type: which is used for finding the attacker's     |
-|              | scripts. It should always be set to "disk-block" in this    |
-|              | test case.                                                   |
-|              | 2) host: which is the name of a control node being attacked.|
-|              | e.g.                                                         |
-|              | -fault_type: "disk-block"                                    |
-|              | -host: node1                                                 |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:         |
-|              | 1. the "openstack-cmd" monitor constantly requests a        |
-|              | specific OpenStack command, and needs two parameters:       |
-|              | 1) monitor_type: which is used for finding the monitor      |
-|              | class and related scripts. It should always be set to       |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the     |
-|              | request.                                                     |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "nova flavor-list"                            |
-|              |                                                              |
-|              | 2. the second monitor verifies the read and write function  |
-|              | by an "operation" and a "result checker".                    |
-|              | the "operation" has two parameters:                          |
-|              | 1) operation_type: which is used for finding the operation  |
-|              | class and related scripts.                                   |
-|              | 2) action_parameter: parameters for the operation.          |
-|              | the "result checker" has three parameters:                   |
-|              | 1) checker_type: which is used for finding the result       |
-|              | checker class and related scripts.                           |
-|              | 2) expectedValue: the expected value for the output of the  |
-|              | checker script.                                              |
-|              | 3) condition: whether the expected value is contained in    |
-|              | the output of the checker script or is exactly the same as  |
-|              | the output.                                                  |
-|              |                                                              |
-|              | In this case, the "operation" adds a flavor and the "result |
-|              | checker" checks whether the flavor is created. Their        |
-|              | parameters are as follows:                                   |
-|              | operation:                                                   |
-|              | -operation_type: "nova-create-flavor"                        |
-|              | -action_parameter:                                           |
-|              | flavorconfig: "test-001 test-001 100 1 1"                    |
-|              | result checker:                                              |
-|              | -checker_type: "check-flavor"                                |
-|              | -expectedValue: "test-001"                                   |
-|              | -condition: "in"                                             |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there is one metric:                     |
-|              | 1) service_outage_time: which indicates the maximum outage  |
-|              | time (seconds) of the specified OpenStack command request.  |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc052.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the disk    |
-|              | I/O being blocked to stopping the monitors                  |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first. |
-|              | The "host" item in this test case uses the node name given  |
-|              | in the pod.yaml.                                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | do attacker: connect the host through SSH, and then execute |
-|              | the block-disk-I/O script on the host.                       |
-|              |                                                              |
-|              | Result: The disk I/O of the host will be blocked             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | start monitors:                                              |
-|              | each monitor will run in an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | do operation: add a flavor                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | do result checker: check whether the flavor is created      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 5        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 6        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It       |
-|              | executes the release-disk-I/O script to release the blocked |
-|              | I/O.                                                         |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails if the monitor SLA is not passed or the result        |
-|              | checker does not pass, or if there is a test case execution |
-|              | problem.                                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
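The operation and result checker introduced in TC052 sit alongside the
attackers and monitors in the test case file. A minimal sketch of just
these two blocks, assuming keys that mirror the parameter names above (the
enclosing scenario structure is omitted, and the actual
opnfv_yardstick_tc052.yaml may nest or name these differently)::

    operations:
    - operation_type: "nova-create-flavor"
      action_parameter:
        flavorconfig: "test-001 test-001 100 1 1"
    resultCheckers:
    - checker_type: "check-flavor"
      expectedValue: "test-001"
      condition: "in"    # "in": expected value contained in the output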
diff --git a/docs/userguide/opnfv_yardstick_tc053.rst b/docs/userguide/opnfv_yardstick_tc053.rst
deleted file mode 100644
index 3c6bbc628..000000000
--- a/docs/userguide/opnfv_yardstick_tc053.rst
+++ /dev/null
@@ -1,142 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC053
-*************************************
-
-+-----------------------------------------------------------------------------+
-|OpenStack Controller Load Balance Service High Availability                  |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC053: OpenStack Controller Load Balance    |
-|              | Service High Availability                                    |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the     |
-|              | load balance service (currently HAProxy) that supports      |
-|              | OpenStack on the controller node. When the load balance     |
-|              | service of a specified controller node is killed, the test  |
-|              | checks whether the load balancers on the other controller   |
-|              | nodes still work, and whether the controller node restarts  |
-|              | the killed load balancer.                                    |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case kills the processes of the load balance      |
-|              | service on a selected control node, then checks whether the |
-|              | related OpenStack command requests are OK and the killed    |
-|              | processes are recovered.                                     |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "kill-process" is     |
-|              | needed. This attacker includes three parameters:            |
-|              | 1) fault_type: which is used for finding the attacker's     |
-|              | scripts. It should always be set to "kill-process" in this  |
-|              | test case.                                                   |
-|              | 2) process_name: which is the process name of the specified |
-|              | OpenStack service. If there are multiple processes using    |
-|              | the same name on the host, all of them will be killed by    |
-|              | this attacker.                                               |
-|              | In this case, this parameter should always be set to        |
-|              | "haproxy".                                                   |
-|              | 3) host: which is the name of a control node being attacked.|
-|              |                                                              |
-|              | e.g.                                                         |
-|              | -fault_type: "kill-process"                                  |
-|              | -process_name: "haproxy"                                     |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:         |
-|              | 1. the "openstack-cmd" monitor constantly requests a        |
-|              | specific OpenStack command, and needs two parameters:       |
-|              | 1) monitor_type: which is used for finding the monitor      |
-|              | class and related scripts. It should always be set to       |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the     |
-|              | request.                                                     |
-|              |                                                              |
-|              | 2. the "process" monitor checks whether a process is        |
-|              | running on a specific node, and needs three parameters:     |
-|              | 1) monitor_type: which is used for finding the monitor      |
-|              | class and related scripts. It should always be set to       |
-|              | "process" for this monitor.                                  |
-|              | 2) process_name: which is the process name to monitor       |
-|              | 3) host: which is the name of the node running the process  |
-|              | In this case, the command_name of monitor1 should be a      |
-|              | command of a service supported by the load balancer, and    |
-|              | the process_name of monitor2 should be "haproxy", for       |
-|              | example:                                                     |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "nova image-list"                             |
-|              | monitor2:                                                    |
-|              | -monitor_type: "process"                                     |
-|              | -process_name: "haproxy"                                     |
-|              | -host: node1                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there are two metrics:                   |
-|              | 1) service_outage_time: which indicates the maximum outage  |
-|              | time (seconds) of the specified OpenStack command request.  |
-|              | 2) process_recover_time: which indicates the maximum time   |
-|              | (seconds) from the process being killed until it is         |
-|              | recovered                                                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc053.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the process |
-|              | being killed to stopping the monitors                        |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first. |
-|              | The "host" item in this test case uses the node name given  |
-|              | in the pod.yaml.                                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run in an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attacker: connect the host through SSH, and then execute |
-|              | the kill-process script with the param value specified by   |
-|              | "process_name"                                               |
-|              |                                                              |
-|              | Result: The process will be killed.                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It will  |
-|              | check the status of the specified process on the host, and  |
-|              | restart the process if it is not running, for the next test |
-|              | cases.                                                       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if the SLA is not passed, or if there is a test  |
-|              | case execution problem.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc054.rst b/docs/userguide/opnfv_yardstick_tc054.rst
deleted file mode 100644
index 7f92be2bc..000000000
--- a/docs/userguide/opnfv_yardstick_tc054.rst
+++ /dev/null
@@ -1,125 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Yin Kanglin and others.
-.. 14_ykl@tongji.edu.cn
-
-*************************************
-Yardstick Test Case Description TC054
-*************************************
-
-+-----------------------------------------------------------------------------+
-|OpenStack Virtual IP High Availability                                       |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC054: OpenStack Virtual IP High            |
-|              | Availability                                                 |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case will verify the high availability of the     |
-|              | virtual IP in the environment. When the master node of the  |
-|              | virtual IP is abnormally shut down, the connection to the   |
-|              | virtual IP and the services bound to it should still be OK. |
-+--------------+--------------------------------------------------------------+
-|test method   | This test case shuts down the virtual IP master node with   |
-|              | some fault injection tools, then checks whether the virtual |
-|              | IPs can be pinged and the services bound to them are OK     |
-|              | with some monitor tools.                                     |
-+--------------+--------------------------------------------------------------+
-|attackers     | In this test case, an attacker called "control-shutdown" is |
-|              | needed. This attacker includes two parameters:              |
-|              | 1) fault_type: which is used for finding the attacker's     |
-|              | scripts. It should always be set to "control-shutdown" in   |
-|              | this test case.                                              |
-|              | 2) host: which is the name of a control node being attacked.|
-|              |                                                              |
-|              | In this case, the host should be the virtual IP master      |
-|              | node, which means the host IP is the virtual IP, for        |
-|              | example:                                                     |
-|              | -fault_type: "control-shutdown"                              |
-|              | -host: node1 (the VIP master node)                           |
-+--------------+--------------------------------------------------------------+
-|monitors      | In this test case, two kinds of monitor are needed:         |
-|              | 1. the "ip_status" monitor, which pings a specific IP to    |
-|              | check its connectivity, and needs two parameters:           |
-|              | 1) monitor_type: which is used for finding the monitor      |
-|              | class and related scripts. It should always be set to       |
-|              | "ip_status" for this monitor.                                |
-|              | 2) ip_address: the IP to be pinged. In this case,           |
-|              | ip_address should be the virtual IP.                         |
-|              |                                                              |
-|              | 2. the "openstack-cmd" monitor constantly requests a        |
-|              | specific OpenStack command, and needs two parameters:       |
-|              | 1) monitor_type: which is used for finding the monitor      |
-|              | class and related scripts. It should always be set to       |
-|              | "openstack-cmd" for this monitor.                            |
-|              | 2) command_name: which is the command name used for the     |
-|              | request.                                                     |
-|              |                                                              |
-|              | e.g.                                                         |
-|              | monitor1:                                                    |
-|              | -monitor_type: "ip_status"                                   |
-|              | -ip_address: 192.168.0.2                                     |
-|              | monitor2:                                                    |
-|              | -monitor_type: "openstack-cmd"                               |
-|              | -command_name: "nova image-list"                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metrics       | In this test case, there are two metrics:                   |
-|              | 1) ping_outage_time: which indicates the maximum outage     |
-|              | time to ping the specified host.                             |
-|              | 2) service_outage_time: which indicates the maximum outage  |
-|              | time (seconds) of the specified OpenStack command request.  |
-+--------------+--------------------------------------------------------------+
-|test tool     | Developed by the project. Please see folder:                 |
-|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | ETSI NFV REL001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | This test case needs two configuration files:                |
-|              | 1) test case file: opnfv_yardstick_tc054.yaml                |
-|              | -Attackers: see above "attackers" description                |
-|              | -waiting_time: which is the time (seconds) from the node    |
-|              | being shut down to stopping the monitors                    |
-|              | -Monitors: see above "monitors" description                  |
-|              | -SLA: see above "metrics" description                        |
-|              |                                                              |
-|              | 2) POD file: pod.yaml                                        |
-|              | The POD configuration should be recorded in pod.yaml first. |
-|              | The "host" item in this test case uses the node name given  |
-|              | in the pod.yaml.                                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | start monitors:                                              |
-|              | each monitor will run in an independent process              |
-|              |                                                              |
-|              | Result: The monitor info will be collected.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 2        | do attacker: connect the host through SSH, and then execute |
-|              | the shutdown script on the VIP master node.                  |
-|              |                                                              |
-|              | Result: The VIP master node will be shut down                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 3        | stop monitors after a period of time specified by            |
-|              | "waiting_time"                                               |
-|              |                                                              |
-|              | Result: The monitor info will be aggregated.                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 4        | verify the SLA                                               |
-|              |                                                              |
-|              | Result: The test case is passed or not.                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|post-action   | This is the action taken when the test case exits. It       |
-|              | restarts the original VIP master node if it has not been    |
-|              | restarted.                                                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if the SLA is not passed, or if there is a test  |
-|              | case execution problem.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
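For TC054 the attacker targets the VIP master node and the monitors watch
both the virtual IP itself and an API served through it. A minimal sketch,
assuming keys that mirror the parameter names above (the address is the
example value from the monitors row, not a required one)::

    attackers:
    - fault_type: "control-shutdown"
      host: node1                # the VIP master node
    monitors:
    - monitor_type: "ip_status"
      ip_address: 192.168.0.2    # the virtual IP
    - monitor_type: "openstack-cmd"
      command_name: "nova image-list"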
diff --git a/docs/userguide/opnfv_yardstick_tc055.rst b/docs/userguide/opnfv_yardstick_tc055.rst
deleted file mode 100644
index c861ca90c..000000000
--- a/docs/userguide/opnfv_yardstick_tc055.rst
+++ /dev/null
@@ -1,67 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC055
-*************************************
-
-.. _/proc/cpuinfo: http://www.linfo.org/proc_cpuinfo.html
-
-+-----------------------------------------------------------------------------+
-|Compute Capacity                                                             |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC055_Compute Capacity                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Number of CPUs, number of cores, number of threads,         |
-|              | available memory size and total cache size.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS compute capacity with regard to        |
-|              | hardware specification, including number of CPUs, number of |
-|              | cores, number of threads, available memory size and total   |
-|              | cache size.                                                  |
-|              | Test results, graphs and similar shall be stored for        |
-|              | comparison reasons and product evolution understanding      |
-|              | between different OPNFV versions and/or configurations.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc055.yaml                            |
-|              |                                                              |
-|              | There are no additional configurations to be set for this   |
-|              | TC.                                                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | /proc/cpuinfo                                                |
-|              |                                                              |
-|              | this TC uses /proc/cpuinfo as the source to produce compute |
-|              | capacity output.                                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | /proc/cpuinfo_                                               |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | None.                                                        |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | No POD specific requirements have been identified.          |
-|conditions    |                                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The hosts are installed, the TC is invoked and logs are     |
-|              | produced and stored.                                         |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | None. Hardware specifications are fetched and stored.       |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc061.rst b/docs/userguide/opnfv_yardstick_tc061.rst
deleted file mode 100644
index 1d424414e..000000000
--- a/docs/userguide/opnfv_yardstick_tc061.rst
+++ /dev/null
@@ -1,88 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC061
-*************************************
-
-.. _man-pages: http://linux.die.net/man/1/sar
-
-+-----------------------------------------------------------------------------+
-|Network Utilization                                                          |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC061_Network Utilization                   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Network utilization                                          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS network capability with regard to      |
-|              | network utilization, including total number of packets      |
-|              | received per second, total number of packets transmitted    |
-|              | per second, total number of kilobytes received per second,  |
-|              | total number of kilobytes transmitted per second, number of |
-|              | compressed packets received per second (for cslip etc.),    |
-|              | number of compressed packets transmitted per second, number |
-|              | of multicast packets received per second, and utilization   |
-|              | percentage of the network interface.                         |
-|              | This test case should be run in parallel with other         |
-|              | Yardstick test cases and not run as a stand-alone test      |
-|              | case.                                                        |
-|              | It measures the network usage statistics from the network   |
-|              | devices; average, minimum and maximum values are obtained.  |
-|              | The purpose is also to be able to spot trends.              |
-|              | Test results, graphs and similar shall be stored for        |
-|              | comparison reasons and product evolution understanding      |
-|              | between different OPNFV versions and/or configurations.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | File: netutilization.yaml (in the 'samples' directory)      |
-|              |                                                              |
-|              | * interval: 1 - repeat, pausing every 1 second in-between.  |
-|              | * count: 1 - display statistics 1 time, then exit.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | sar                                                          |
-|              |                                                              |
-|              | The sar command writes to standard output the contents of   |
-|              | selected cumulative activity counters in the operating      |
-|              | system.                                                      |
-|              | sar is normally part of a Linux distribution, hence it      |
-|              | doesn't need to be installed.                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | man-pages_                                                   |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                      |
-|              |                                                              |
-|              | * interval;                                                  |
-|              | * count;                                                     |
-|              | * runner Iteration and intervals.                            |
-|              |                                                              |
-|              | There are default values for each above-mentioned option.   |
-|              | Run in background with other test cases.                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance       |
-|conditions    | with sar included in the image.                             |
-|              |                                                              |
-|              | No POD specific requirements have been identified.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result.                            |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The host is installed as client. The related TC, or TCs, is |
-|              | invoked and sar logs are produced and stored.                |
-|              |                                                              |
-|              | Result: logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | None. Network utilization results are fetched and stored.   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
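The two sar options map directly onto scenario options in the sample file.
A minimal sketch follows; the scenario type name is a hypothetical label
derived from netutilization.yaml and is not confirmed by this document::

    scenarios:
    -
      type: NetUtilization     # hypothetical type name
      options:
        interval: 1            # seconds between samples
        count: 1               # number of samples per iteration
      runner:
        type: Iteration
        iterations: 1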
diff --git a/docs/userguide/opnfv_yardstick_tc063.rst b/docs/userguide/opnfv_yardstick_tc063.rst
deleted file mode 100644
index a77653aa5..000000000
--- a/docs/userguide/opnfv_yardstick_tc063.rst
+++ /dev/null
@@ -1,81 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC063
-*************************************
-
-.. _iostat: http://linux.die.net/man/1/iostat
-.. _fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
-
-+-----------------------------------------------------------------------------+
-|Storage Capacity                                                             |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC063_Storage Capacity                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Storage/disk size, block size                               |
-|              | Disk Utilization                                             |
-+--------------+--------------------------------------------------------------+
-|test purpose  | This test case checks a test_type parameter that selects    |
-|              | one of several models, each of which has its own            |
-|              | measurement task. The test purposes are to measure disk     |
-|              | size, block size and disk utilization. With the test        |
-|              | results, we can evaluate the storage capacity of the host.  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc063.yaml                            |
-|              |                                                              |
-|              | * test_type: "disk_size"                                     |
-|              | * runner:                                                    |
-|              |   type: Iteration                                            |
-|              |   iterations: 1 - test is run 1 time iteratively.           |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | fdisk                                                        |
-|              | A command-line utility that provides disk partitioning      |
-|              | functions                                                    |
-|              |                                                              |
-|              | iostat                                                       |
-|              | This is a computer system monitoring tool used to collect   |
-|              | and show operating system storage input and output          |
-|              | statistics.                                                  |
-+--------------+--------------------------------------------------------------+
-|references    | iostat_                                                      |
-|              | fdisk_                                                       |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                      |
-|              |                                                              |
-|              | * test_type: "disk size", "block size", "disk utilization"  |
-|              | * interval: 1 - how often to stat disk utilization          |
-|              |   type: int                                                  |
-|              |   unit: seconds                                              |
-|              | * count: 15 - how many times to stat disk utilization       |
-|              |   type: int                                                  |
-|              |   unit: na                                                   |
-|              | There are default values for each above-mentioned option.   |
-|              | Run in background with other test cases.                    |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance       |
-|conditions    |                                                              |
-|              | No POD specific requirements have been identified.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | The specified storage capacity and disk information are     |
-|              | output to a file in sequence.                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The pod is available and the hosts are installed. Node5 is  |
-|              | used and logs are produced and stored.                       |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | None.                                                        |
-+--------------+--------------------------------------------------------------+
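A minimal sketch of the TC063 configuration described above, using the
options listed under "applicability"; the scenario type name is a
hypothetical label derived from the test case title, not confirmed by this
document::

    scenarios:
    -
      type: StorageCapacity    # hypothetical type name
      options:
        test_type: "disk_size" # or "block_size", "disk_utilization"
        interval: 1            # seconds between utilization samples
        count: 15              # number of utilization samples
      runner:
        type: Iteration
        iterations: 1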
diff --git a/docs/userguide/opnfv_yardstick_tc069.rst b/docs/userguide/opnfv_yardstick_tc069.rst
deleted file mode 100644
index af0e64fbf..000000000
--- a/docs/userguide/opnfv_yardstick_tc069.rst
+++ /dev/null
@@ -1,100 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC069
-*************************************
-
-.. _RAMspeed: http://alasir.com/software/ramspeed/
-
-.. table::
-   :class: longtable
-
-+-----------------------------------------------------------------------------+
-|Memory Bandwidth                                                             |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC069_Memory Bandwidth                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Megabytes per second (MBps)                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS compute performance with regard to     |
-|              | memory bandwidth.                                            |
-|              | Measure the maximum possible cache and memory performance   |
-|              | while reading and writing certain blocks of data (starting  |
-|              | from 1Kb and further in powers of 2) continuously through   |
-|              | the ALU and FPU respectively.                                |
-|              | Measure different aspects of memory performance via         |
-|              | synthetic simulations. Each simulation consists of four     |
-|              | performance measurements (Copy, Scale, Add, Triad).         |
-|              | Test results, graphs and similar shall be stored for        |
-|              | comparison reasons and product evolution understanding      |
-|              | between different OPNFV versions and/or configurations.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | File: opnfv_yardstick_tc069.yaml                            |
-|              |                                                              |
-|              | * SLA (optional): 7000 (MBps) min_bandwidth: The minimum    |
-|              |   amount of memory bandwidth that is accepted.              |
-|              | * type_id: 1 - runs a specified benchmark                   |
-|              |   (by an ID number):                                         |
-|              |   1 -- INTmark [writing]   4 -- FLOATmark [writing]         |
-|              |   2 -- INTmark [reading]   5 -- FLOATmark [reading]         |
-|              |   3 -- INTmem              6 -- FLOATmem                    |
-|              | * block_size: 64 Megabytes - the maximum block              |
-|              |   size per array.                                            |
-|              | * load: 32 Gigabytes - the amount of data load per pass.    |
-|              | * iterations: 5 - test is run 5 times iteratively.          |
-|              | * interval: 1 - there is 1 second delay between each        |
-|              |   iteration.                                                 |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | RAMspeed                                                     |
-|              |                                                              |
-|              | RAMspeed is a free open source command line utility to      |
-|              | measure cache and memory performance of computer systems.   |
-|              | RAMspeed is not always part of a Linux distribution, hence  |
-|              | it needs to be installed in the test image.                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | RAMspeed_                                                    |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different:                      |
-|              |                                                              |
-|              | * benchmark operations (such as INTmark [writing],          |
-|              |   INTmark [reading], FLOATmark [writing],                   |
-|              |   FLOATmark [reading], INTmem, FLOATmem);                   |
-|              | * block size per array;                                      |
-|              | * load per pass;                                             |
-|              | * number of batch run iterations;                            |
-|              | * iterations and intervals.                                  |
-|              |                                                              |
-|              | There are default values for each above-mentioned option.   |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance       |
-|conditions    | with RAMspeed included in the image.                        |
-|              |                                                              |
-|              | No POD specific requirements have been identified.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The host is installed as client. RAMspeed is invoked and    |
-|              | logs are produced and stored.                                |
-|              |                                                              |
-|              | Result: logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Test fails if the measured memory bandwidth is below the    |
-|              | SLA value or if there is a test case execution problem.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
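The options in the configuration row translate into a scenario definition
such as the following minimal sketch; the scenario type name is a
hypothetical label, and the values are the defaults listed above::

    scenarios:
    -
      type: Ramspeed           # hypothetical type name
      options:
        type_id: 1             # 1 == INTmark [writing]
        block_size: 64         # maximum block size per array, in MB
        load: 32               # data load per pass, in GB
      runner:
        type: Iteration
        iterations: 5
        interval: 1            # seconds between iterations
      sla:
        min_bandwidth: 7000    # MBps
        action: monitor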
diff --git a/docs/userguide/opnfv_yardstick_tc070.rst b/docs/userguide/opnfv_yardstick_tc070.rst
deleted file mode 100644
index 64fcc0c91..000000000
--- a/docs/userguide/opnfv_yardstick_tc070.rst
+++ /dev/null
@@ -1,110 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC070
-*************************************
-
-.. _cirros: https://download.cirros-cloud.net
-.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
-.. _free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
-
-+-----------------------------------------------------------------------------+
-|Latency, Memory Utilization, Throughput, Packet Loss                         |
-|                                                                             |
-+--------------+--------------------------------------------------------------+
-|test case id  | OPNFV_YARDSTICK_TC070_Latency, Memory Utilization,          |
-|              | Throughput, Packet Loss                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|metric        | Number of flows, latency, throughput, memory utilization,   |
-|              | packet loss                                                  |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test purpose  | To evaluate the IaaS network performance with regard to     |
-|              | flows and throughput, such as if and how different amounts  |
-|              | of flows matter for the throughput between hosts on         |
-|              | different compute blades. Typically e.g. the performance of |
-|              | a vSwitch depends on the number of flows running through    |
-|              | it. Also the performance of other equipment or entities can |
-|              | depend on the number of flows or the packet sizes used.     |
-|              | The purpose is also to be able to spot trends.              |
-|              | Test results, graphs and similar shall be stored for        |
-|              | comparison reasons and product evolution understanding      |
-|              | between different OPNFV versions and/or configurations.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc070.yaml                            |
-|              |                                                              |
-|              | Packet size: 64 bytes                                        |
-|              | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000.    |
-|              | The configured port amounts map to between 2 and 1001000    |
-|              | flows, respectively. Each port amount is run two times, for |
-|              | 20 seconds each. Then the next port_amount is run, and so   |
-|              | on.                                                          |
-|              | During the test, memory utilization on both client and      |
-|              | server, and the network latency between the client and      |
-|              | server, are measured.                                        |
-|              | The client and server are distributed on different HW.      |
-|              | For SLA, max_ppm is set to 1000.                             |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test tool     | pktgen                                                       |
-|              |                                                              |
-|              | Pktgen is not always part of a Linux distribution, hence it |
-|              | needs to be installed. It is part of the Yardstick Glance   |
-|              | image.                                                       |
-|              | (As an example see the /yardstick/tools/ directory for how  |
-|              | to generate a Linux image with pktgen included.)            |
-|              |                                                              |
-|              | ping                                                         |
-|              |                                                              |
-|              | Ping is normally part of any Linux distribution, hence it   |
-|              | doesn't need to be installed. It is also part of the        |
-|              | Yardstick Glance image.                                      |
-|              | (For example also a cirros_ image can be downloaded; it     |
-|              | includes ping.)                                              |
-|              |                                                              |
-|              | free                                                         |
-|              |                                                              |
-|              | free provides information about unused and used memory and  |
-|              | swap space on any computer running Linux or another         |
-|              | Unix-like operating system.                                  |
-|              | free is normally part of a Linux distribution, hence it     |
-|              | doesn't need to be installed.                                |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|references    | Ping and free man pages                                      |
-|              |                                                              |
-|              | pktgen_                                                      |
-|              |                                                              |
-|              | ETSI-NFV-TST001                                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different packet sizes, amounts |
-|              | of flows and test durations. Default values exist.          |
-|              |                                                              |
-|              | SLA (optional): max_ppm: The number of packets per million  |
-|              | packets sent that are acceptable to lose, not received.     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|pre-test      | The test case image needs to be installed into Glance       |
-|conditions    | with pktgen included in it.                                 |
-|              |                                                              |
-|              | No POD specific requirements have been identified.          |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result                              |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|step 1        | The hosts are installed, as server and client. pktgen is    |
-|              | invoked and logs are produced and stored.                    |
-|              |                                                              |
-|              | Result: Logs are stored.                                     |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
-|test verdict  | Fails only if the SLA is not passed, or if there is a test  |
-|              | case execution problem.                                      |
-|              |                                                              |
-+--------------+--------------------------------------------------------------+
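A minimal sketch of one pktgen iteration from the configuration above; the
scenario type name and the host/target context names are placeholders, and
the key names are assumptions based on the parameters listed in this
table::

    scenarios:
    -
      type: Pktgen             # hypothetical type name
      options:
        packetsize: 64
        number_of_ports: 10    # one of 1, 10, 50, 100, 300, 500, 750, 1000
        duration: 20           # seconds per run
      host: client.yardstick   # placeholder context names
      target: server.yardstick
      runner:
        type: Iteration
        iterations: 2
      sla:
        max_ppm: 1000
        action: monitor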
| -| | The client and server are distributed on different HW. | -| | For SLA max_ppm is set to 1000. | -| | | -+--------------+--------------------------------------------------------------+ -|test tool | pktgen | -| | | -| | Pktgen is not always part of a Linux distribution, hence it | -| | needs to be installed. It is part of the Yardstick Glance | -| | image. | -| | (As an example see the /yardstick/tools/ directory for how | -| | to generate a Linux image with pktgen included.) | -| | | -| | ping | -| | | -| | Ping is normally part of any Linux distribution, hence it | -| | doesn't need to be installed. It is also part of the | -| | Yardstick Glance image. | -| | (For example also a cirros_ image can be downloaded, it | -| | includes ping) | -| | | -| | cachestat | -| | | -| | cachestat is not always part of a Linux distribution, hence | -| | it needs to be installed. | -| | | -+--------------+--------------------------------------------------------------+ -|references | Ping man pages | -| | | -| | pktgen_ | -| | | -| | cachestat_ | -| | | -| | ETSI-NFV-TST001 | -| | | -+--------------+--------------------------------------------------------------+ -|applicability | Test can be configured with different packet sizes, amount | -| | of flows and test duration. Default values exist. | -| | | -| | SLA (optional): max_ppm: The number of packets per million | -| | packets sent that are acceptable to lose, not received. | -| | | -+--------------+--------------------------------------------------------------+ -|pre-test | The test case image needs to be installed into Glance | -|conditions | with pktgen included in it. | -| | | -| | No POD specific requirements have been identified. | -| | | -+--------------+--------------------------------------------------------------+ -|test sequence | description and expected result | -| | | -+--------------+--------------------------------------------------------------+ -|step 1 | The hosts are installed, as server and client. pktgen is | -| | invoked and logs are produced and stored. | -| | | -| | Result: Logs are stored. | -| | | -+--------------+--------------------------------------------------------------+ -|test verdict | Fails only if SLA is not passed, or if there is a test case | -| | execution problem. | -| | | -+--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc072.rst b/docs/userguide/opnfv_yardstick_tc072.rst deleted file mode 100644 index 2e7ee057c..000000000 --- a/docs/userguide/opnfv_yardstick_tc072.rst +++ /dev/null @@ -1,110 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. - -************************************* -Yardstick Test Case Description TC072 -************************************* - -.. _cirros: https://download.cirros-cloud.net -.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt -.. 
_sar: http://linux.die.net/man/1/sar
-
-+-----------------------------------------------------------------------------+
-|Latency, Network Utilization, Throughput, Packet Loss |
-| |
-+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC072_Latency, Network Utilization, |
-| | Throughput, Packet Loss |
-| | |
-+--------------+--------------------------------------------------------------+
-|metric | Number of flows, latency, throughput, Network Utilization, |
-| | packet loss |
-| | |
-+--------------+--------------------------------------------------------------+
-|test purpose | To evaluate the IaaS network performance with regards to |
-| | flows and throughput, such as whether and how different |
-| | amounts of flows affect the throughput between hosts on |
-| | different compute blades. Typically, e.g., the performance |
-| | of a vSwitch depends on the number of flows running through |
-| | it. The performance of other equipment or entities can also |
-| | depend on the number of flows or the packet sizes used. |
-| | The purpose is also to be able to spot trends. |
-| | Test results, graphs and similar shall be stored for |
-| | comparison reasons and product evolution understanding |
-| | between different OPNFV versions and/or configurations. |
-| | |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc072.yaml |
-| | |
-| | Packet size: 64 bytes |
-| | Number of ports: 1, 10, 50, 100, 300, 500, 750 and 1000. |
-| | The configured amounts of ports map to between 2 and 1001000 |
-| | flows, respectively. Each port amount is run two times, for |
-| | 20 seconds each. Then the next port_amount is run, and so on.|
-| | During the test Network Utilization on both client and |
-| | server, and the network latency between the client and server|
-| | are measured. |
-| | The client and server are distributed on different HW. |
-| | For SLA max_ppm is set to 1000. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test tool | pktgen |
-| | |
-| | Pktgen is not always part of a Linux distribution, hence it |
-| | needs to be installed. It is part of the Yardstick Glance |
-| | image. |
-| | (As an example see the /yardstick/tools/ directory for how |
-| | to generate a Linux image with pktgen included.) |
-| | |
-| | ping |
-| | |
-| | Ping is normally part of any Linux distribution, hence it |
-| | doesn't need to be installed. It is also part of the |
-| | Yardstick Glance image. |
-| | (For example also a cirros_ image can be downloaded, it |
-| | includes ping) |
-| | |
-| | sar |
-| | |
-| | The sar command writes to standard output the contents of |
-| | selected cumulative activity counters in the operating |
-| | system. |
-| | sar is normally part of a Linux distribution, hence it |
-| | doesn't need to be installed. |
-| | |
-+--------------+--------------------------------------------------------------+
-|references | Ping and sar man pages |
-| | |
-| | pktgen_ |
-| | |
-| | ETSI-NFV-TST001 |
-| | |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different packet sizes, amount |
-| | of flows and test duration. Default values exist. |
-| | |
-| | SLA (optional): max_ppm: The number of packets per million |
-| | packets sent that are acceptable to lose, not received.
| -| | | -+--------------+--------------------------------------------------------------+ -|pre-test | The test case image needs to be installed into Glance | -|conditions | with pktgen included in it. | -| | | -| | No POD specific requirements have been identified. | -| | | -+--------------+--------------------------------------------------------------+ -|test sequence | description and expected result | -| | | -+--------------+--------------------------------------------------------------+ -|step 1 | The hosts are installed, as server and client. pktgen is | -| | invoked and logs are produced and stored. | -| | | -| | Result: Logs are stored. | -| | | -+--------------+--------------------------------------------------------------+ -|test verdict | Fails only if SLA is not passed, or if there is a test case | -| | execution problem. | -| | | -+--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc073.rst b/docs/userguide/opnfv_yardstick_tc073.rst deleted file mode 100644 index ad4526405..000000000 --- a/docs/userguide/opnfv_yardstick_tc073.rst +++ /dev/null @@ -1,81 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. - -************************************* -Yardstick Test Case Description TC073 -************************************* - -.. _netperf: http://www.netperf.org/netperf/training/Netperf.html - -+-----------------------------------------------------------------------------+ -|Throughput per NFVI node test | -| | -+--------------+--------------------------------------------------------------+ -|test case id | OPNFV_YARDSTICK_TC073_Network latency and throughput between | -| | nodes | -| | | -+--------------+--------------------------------------------------------------+ -|metric | Network latency and throughput | -| | | -+--------------+--------------------------------------------------------------+ -|test purpose | To evaluate the IaaS network performance with regards to | -| | flows and throughput, such as if and how different amounts | -| | of packet sizes and flows matter for the throughput between | -| | nodes in one pod. | -| | | -+--------------+--------------------------------------------------------------+ -|configuration | file: opnfv_yardstick_tc073.yaml | -| | | -| | Packet size: default 1024 bytes. | -| | | -| | Test length: default 20 seconds. | -| | | -| | The client and server are distributed on different nodes. | -| | | -| | For SLA max_mean_latency is set to 100. | -| | | -+--------------+--------------------------------------------------------------+ -|test tool | netperf_ | -| | Netperf is a software application that provides network | -| | bandwidth testing between two hosts on a network. It | -| | supports Unix domain sockets, TCP, SCTP, DLPI and UDP via | -| | BSD Sockets. Netperf provides a number of predefined tests | -| | e.g. to measure bulk (unidirectional) data transfer or | -| | request response performance. | -| | (netperf is not always part of a Linux distribution, hence | -| | it needs to be installed.) | -| | | -+--------------+--------------------------------------------------------------+ -|references | netperf Man pages | -| | ETSI-NFV-TST001 | -| | | -+--------------+--------------------------------------------------------------+ -|applicability | Test can be configured with different packet sizes and | -| | test duration. 
Default values exist. |
-| | |
-| | SLA (optional): max_mean_latency |
-| | |
-+--------------+--------------------------------------------------------------+
-|pre-test | The POD can be reached by external IP and logged on to via |
-|conditions | SSH |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 1 | Install the netperf tool on each specified node; one acts as |
-| | the server, and the other as the client. |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 2 | Log on to the client node and use the netperf command to |
-| | execute the network performance test. |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 3 | The throughput results are stored. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test verdict | Fails only if SLA is not passed, or if there is a test case |
-| | execution problem. |
-| | |
-+--------------+--------------------------------------------------------------+
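For illustration, the client-side netperf step could look roughly like the sketch below. The TCP_RR test type, the omni output selector and the server address are assumptions for illustration; the actual command line Yardstick builds may differ.

.. code-block:: python

    # Hedged sketch of TC073's client-side netperf step. The command-line
    # options and output parsing are illustrative assumptions.
    import subprocess

    SERVER_IP = "10.0.0.2"      # illustrative address of the netserver node
    MAX_MEAN_LATENCY = 100.0    # SLA value from opnfv_yardstick_tc073.yaml

    def mean_latency_us(server_ip, duration=20):
        # Run a netperf TCP request/response test and ask the omni output
        # formatter for the mean latency in microseconds.
        cmd = ["netperf", "-H", server_ip, "-l", str(duration),
               "-t", "TCP_RR", "--", "-o", "mean_latency"]
        out = subprocess.check_output(cmd, universal_newlines=True)
        return float(out.strip().splitlines()[-1])

    if __name__ == "__main__":
        latency = mean_latency_us(SERVER_IP)
        print("SLA %s" % ("passed" if latency <= MAX_MEAN_LATENCY else "failed"))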
diff --git a/docs/userguide/opnfv_yardstick_tc074.rst b/docs/userguide/opnfv_yardstick_tc074.rst
deleted file mode 100644
index 92cd51439..000000000
--- a/docs/userguide/opnfv_yardstick_tc074.rst
+++ /dev/null
@@ -1,137 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC074
-*************************************
-
-.. _Storperf: https://wiki.opnfv.org/display/storperf/Storperf
-
-+-----------------------------------------------------------------------------+
-|Storperf |
-| |
-+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC074_Storperf |
-| | |
-+--------------+--------------------------------------------------------------+
-|metric | Storage performance |
-| | |
-+--------------+--------------------------------------------------------------+
-|test purpose | StorPerf integration with Yardstick. The purpose of StorPerf |
-| | is to provide a tool to measure block and object storage |
-| | performance in an NFVI. When complemented with a |
-| | characterization of typical VF storage performance |
-| | requirements, it can provide pass/fail thresholds for test, |
-| | staging, and production NFVI environments. |
-| | |
-| | The benchmarks developed for block and object storage will |
-| | be sufficiently varied to provide a good preview of expected |
-| | storage performance behavior for any type of VNF workload. |
-| | |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc074.yaml |
-| | |
-| | * agent_count: 1 - the number of VMs to be created |
-| | * agent_image: "Ubuntu-14.04" - image used for creating VMs |
-| | * public_network: "ext-net" - name of public network |
-| | * volume_size: 2 - cinder volume size |
-| | * block_sizes: "4096" - data block size |
-| | * queue_depths: "4" |
-| | * StorPerf_ip: "192.168.200.2" |
-| | * query_interval: 10 - state query interval |
-| | * timeout: 600 - maximum allowed job time |
-| | |
-+--------------+--------------------------------------------------------------+
-|test tool | Storperf_ |
-| | |
-| | StorPerf is a tool to measure block and object storage |
-| | performance in an NFVI. |
-| | |
-| | StorPerf is delivered as a Docker container from |
-| | https://hub.docker.com/r/opnfv/storperf/tags/. |
-| | |
-+--------------+--------------------------------------------------------------+
-|references | Storperf_ |
-| | |
-| | ETSI-NFV-TST001 |
-| | |
-+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different: |
-| | |
-| | * agent_count |
-| | * volume_size |
-| | * block_sizes |
-| | * queue_depths |
-| | * query_interval |
-| | * timeout |
-| | * target=[device or path] |
-| | The path to either an attached storage device |
-| | (/dev/vdb, etc) or a directory path (/opt/storperf) that |
-| | will be used to execute the performance test. In the case |
-| | of a device, the entire device will be used. If not |
-| | specified, the current directory will be used. |
-| | * workload=[workload module] |
-| | If not specified, the default is to run all workloads. The |
-| | workload types are: |
-| | - rs: 100% Read, sequential data |
-| | - ws: 100% Write, sequential data |
-| | - rr: 100% Read, random access |
-| | - wr: 100% Write, random access |
-| | - rw: 70% Read / 30% write, random access |
-| | * nossd: Do not perform SSD style preconditioning. |
-| | * nowarm: Do not perform a warmup prior to |
-| | measurements. |
-| | * report= [job_id] |
-| | Query the status of the supplied job_id and report on |
-| | metrics. If a workload is supplied, will report on only |
-| | that subset. |
-| | |
-| | There are default values for each above-mentioned option. |
-| | |
-+--------------+--------------------------------------------------------------+
-|pre-test | If you do not have an Ubuntu 14.04 image in Glance, you will |
-|conditions | need to add one. A key pair for launching agents is also |
-| | required. |
-| | |
-| | StorPerf is required to be installed in the environment. |
-| | There are two possible methods for StorPerf installation: |
-| | Run the container on the Jump Host |
-| | Run the container in a VM |
-| | |
-| | Running StorPerf on the Jump Host |
-| | Requirements: |
-| | - Docker must be installed |
-| | - Jump Host must have access to the OpenStack Controller |
-| | API |
-| | - Jump Host must have internet connectivity for |
-| | downloading the docker image |
-| | - Enough floating IPs must be available to match your |
-| | agent count |
-| | |
-| | Running StorPerf in a VM |
-| | Requirements: |
-| | - VM has docker installed |
-| | - VM has OpenStack Controller credentials and can |
-| | communicate with the Controller API |
-| | - VM has internet connectivity for downloading the |
-| | docker image |
-| | - Enough floating IPs must be available to match your |
-| | agent count |
-| | |
-| | No POD specific requirements have been identified. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 1 | StorPerf is installed and the Ubuntu 14.04 image is stored |
-| | in Glance. The TC is invoked and logs are produced and |
-| | stored. |
-| | |
-| | Result: Logs are stored. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test verdict | None. Storage performance results are fetched and stored. |
-| | |
-+--------------+--------------------------------------------------------------+
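The query_interval and timeout options above suggest a simple polling loop around StorPerf's job interface. Below is a minimal sketch of such a loop; the REST endpoint, port and status field are assumptions for illustration, and StorPerf's actual API may differ.

.. code-block:: python

    # Hedged sketch of polling a StorPerf job with the query_interval and
    # timeout settings from opnfv_yardstick_tc074.yaml. The REST endpoint,
    # port and status field below are assumptions for illustration.
    import time
    import requests

    STORPERF_IP = "192.168.200.2"   # StorPerf_ip from the configuration
    QUERY_INTERVAL = 10             # state query interval, in seconds
    TIMEOUT = 600                   # maximum allowed job time, in seconds

    def wait_for_job(job_id):
        url = "http://%s:5000/api/v1.0/jobs" % STORPERF_IP  # assumed endpoint
        deadline = time.time() + TIMEOUT
        while time.time() < deadline:
            status = requests.get(url, params={"id": job_id}).json()
            if status.get("Status") == "Completed":  # assumed status field
                return status
            time.sleep(QUERY_INTERVAL)
        raise RuntimeError("StorPerf job %s did not finish within %s s"
                           % (job_id, TIMEOUT))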
diff --git a/docs/userguide/opnfv_yardstick_tc075.rst b/docs/userguide/opnfv_yardstick_tc075.rst
deleted file mode 100644
index a6ff34447..000000000
--- a/docs/userguide/opnfv_yardstick_tc075.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC075
-*************************************
-
-
-+-----------------------------------------------------------------------------+
-|Network Capacity and Scale Testing |
-| |
-+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC075_Network_Capacity_and_Scale_testing |
-| | |
-+--------------+--------------------------------------------------------------+
-|metric | Number of connections, Number of frames sent/received |
-| | |
-+--------------+--------------------------------------------------------------+
-|test purpose | To evaluate the network capacity and scale with regards to |
-| | connections and frames. |
-| | |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc075.yaml |
-| | |
-| | There is no additional configuration to be set for this TC. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test tool | netstat |
-| | |
-| | Netstat is normally part of any Linux distribution, hence it |
-| | doesn't need to be installed. |
-| | |
-+--------------+--------------------------------------------------------------+
-|references | Netstat man page |
-| | |
-| | ETSI-NFV-TST001 |
-| | |
-+--------------+--------------------------------------------------------------+
-|applicability | This test case is mainly for evaluating network performance. |
-| | |
-+--------------+--------------------------------------------------------------+
-|pre-test | Each pod node must have netstat included in it. |
-|conditions | |
-| | |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 1 | The pod is available. |
-| | Netstat is invoked and logs are produced and stored. |
-| | |
-| | Result: Logs are stored. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test verdict | None. Number of connections and frames are fetched and |
-| | stored. |
-| | |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc076.rst b/docs/userguide/opnfv_yardstick_tc076.rst
deleted file mode 100644
index ac7bde794..000000000
--- a/docs/userguide/opnfv_yardstick_tc076.rst
+++ /dev/null
@@ -1,61 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-
-*************************************
-Yardstick Test Case Description TC076
-*************************************
-
-
-+-----------------------------------------------------------------------------+
-|Monitor Network Metrics |
-| |
-+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC076_Monitor_Network_Metrics |
-| | |
-+--------------+--------------------------------------------------------------+
-|metric | IP datagram error rate, ICMP message error rate, |
-| | TCP segment error rate and UDP datagram error rate |
-| | |
-+--------------+--------------------------------------------------------------+
-|test purpose | Monitor network metrics provided by the kernel in a host and |
-| | calculate the IP datagram error rate, ICMP message error |
-| | rate, TCP segment error rate and UDP datagram error rate |
-| | (a sketch of this calculation appears at the end of this |
-| | document). |
-| | |
-+--------------+--------------------------------------------------------------+
-|configuration | file: opnfv_yardstick_tc076.yaml |
-| | |
-| | There is no additional configuration to be set for this TC. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test tool | nstat |
-| | |
-| | nstat is a simple tool to monitor kernel snmp counters and |
-| | network interface statistics. |
-| | |
-+--------------+--------------------------------------------------------------+
-|references | nstat man page |
-| | |
-| | ETSI-NFV-TST001 |
-| | |
-+--------------+--------------------------------------------------------------+
-|applicability | This test case is mainly for monitoring network metrics. |
-| | |
-+--------------+--------------------------------------------------------------+
-|pre-test | |
-|conditions | |
-| | |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 1 | The pod is available. |
-| | Nstat is invoked and logs are produced and stored. |
-| | |
-| | Result: Logs are stored. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test verdict | None.
| -| | | -+--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/references.rst b/docs/userguide/references.rst deleted file mode 100644 index 05729ba75..000000000 --- a/docs/userguide/references.rst +++ /dev/null @@ -1,60 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -========== -References -========== - - -OPNFV -===== - -* Parser wiki: https://wiki.opnfv.org/parser -* Pharos wiki: https://wiki.opnfv.org/pharos -* VTC: https://wiki.opnfv.org/vtc -* Yardstick CI: https://build.opnfv.org/ci/view/yardstick/ -* Yardstick and ETSI TST001 presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925205%2Fopnfv_summit_-_bridging_opnfv_and_etsi.pdf -* Yardstick Project presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925208%2Fopnfv_summit_-_yardstick_project.pdf -* Yardstick wiki: https://wiki.opnfv.org/yardstick - -References used in Test Cases -============================= - -* cachestat: https://github.com/brendangregg/perf-tools/tree/master/fs -* cirros-image: https://download.cirros-cloud.net -* cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest -* DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/ -* DPDK supported NICs: http://dpdk.org/doc/nics -* fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html -* fio: http://www.bluestop.org/fio/HOWTO.txt -* free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html -* iperf3: https://iperf.fr/ -* iostat: http://linux.die.net/man/1/iostat -* Lmbench man-pages: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html -* Memory bandwidth man-pages: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html -* mpstat man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html -* netperf: http://www.netperf.org/netperf/training/Netperf.html -* pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt -* RAMspeed: http://alasir.com/software/ramspeed/ -* sar: http://linux.die.net/man/1/sar -* SR-IOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking -* Storperf: https://wiki.opnfv.org/display/storperf/Storperf -* unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench - - -Research -======== - -* NCSRD: http://www.demokritos.gr/?lang=en -* T-NOVA: http://www.t-nova.eu/ -* T-NOVA Results: http://www.t-nova.eu/results/ - -Standards -========= - -* ETSI NFV: http://www.etsi.org/technologies-clusters/technologies/nfv -* ETSI GS-NFV TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf -* RFC2544: https://www.ietf.org/rfc/rfc2544.txt - diff --git a/docs/userguide/testcase_description_v2_template.rst b/docs/userguide/testcase_description_v2_template.rst deleted file mode 100644 index 91c2a7e33..000000000 --- a/docs/userguide/testcase_description_v2_template.rst +++ /dev/null @@ -1,64 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -************************************* -Yardstick Test Case Description TCXXX -************************************* - -+-----------------------------------------------------------------------------+ -|test case slogan e.g. 
Network Latency |
-| |
-+--------------+--------------------------------------------------------------+
-|test case id | e.g. OPNFV_YARDSTICK_TC001_NW Latency |
-| | |
-+--------------+--------------------------------------------------------------+
-|metric | what will be measured, e.g. latency |
-| | |
-+--------------+--------------------------------------------------------------+
-|test purpose | describe the purpose of the test case |
-| | |
-+--------------+--------------------------------------------------------------+
-|configuration | what .yaml file to use, state SLA if applicable, state |
-| | test duration, list and describe the scenario options used in|
-| | this TC and also list the options using default values. |
-| | |
-+--------------+--------------------------------------------------------------+
-|test tool | e.g. ping |
-| | |
-+--------------+--------------------------------------------------------------+
-|references | e.g. RFCxxx, ETSI-NFVyyy |
-| | |
-+--------------+--------------------------------------------------------------+
-|applicability | describe variations of the test case which can be |
-| | performed, e.g. run the test for different packet sizes |
-| | |
-+--------------+--------------------------------------------------------------+
-|pre-test | describe configuration in the tool(s) used to perform |
-|conditions | the measurements (e.g. fio, pktgen), POD-specific |
-| | configuration required to enable running the test |
-| | |
-+--------------+--------------------------------------------------------------+
-|test sequence | description and expected result |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 1 | use this to describe tests that require several steps, e.g. |
-| | collect logs. |
-| | |
-| | Result: what happens in this step, e.g. logs collected |
-| | |
-+--------------+--------------------------------------------------------------+
-|step 2 | remove interface |
-| | |
-| | Result: interface down. |
-| | |
-+--------------+--------------------------------------------------------------+
-|step N | what is done in step N |
-| | |
-| | Result: what happens |
-| | |
-+--------------+--------------------------------------------------------------+
-|test verdict | expected behavior, or SLA, pass/fail criteria |
-| | |
-+--------------+--------------------------------------------------------------+
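As referenced from TC076 above: the error rates it reports are, in essence, ratios of kernel SNMP error counters to the corresponding totals. A minimal sketch of the IP datagram error rate follows, assuming the counters are read from /proc/net/snmp; the exact counters Yardstick's nstat scenario uses may differ.

.. code-block:: python

    # Hedged sketch of the TC076 error-rate calculation, assuming the rates
    # are error-counter/total ratios taken from /proc/net/snmp. The choice
    # of counters is an assumption for illustration.

    def read_proc_net_snmp(path="/proc/net/snmp"):
        """Parse /proc/net/snmp into {protocol: {counter: value}}."""
        with open(path) as fh:
            lines = fh.read().splitlines()
        stats = {}
        # The file alternates a header line and a value line per protocol.
        for header, values in zip(lines[0::2], lines[1::2]):
            proto, names = header.split(":", 1)
            numbers = values.split(":", 1)[1].split()
            stats[proto] = dict(zip(names.split(), map(int, numbers)))
        return stats

    def ip_datagram_error_rate(stats):
        ip = stats["Ip"]
        received = ip["InReceives"]
        errors = ip["InHdrErrors"] + ip["InAddrErrors"]
        return float(errors) / received if received else 0.0

    if __name__ == "__main__":
        snmp = read_proc_net_snmp()
        print("IP datagram error rate: %.6f" % ip_datagram_error_rate(snmp))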