Diffstat (limited to 'docs')
28 files changed, 1633 insertions, 36 deletions
diff --git a/docs/userguide/architecture.rst b/docs/userguide/03-architecture.rst index 3abb67b7d..3abb67b7d 100755 --- a/docs/userguide/architecture.rst +++ b/docs/userguide/03-architecture.rst
diff --git a/docs/userguide/apexlake_installation.rst b/docs/userguide/05-apexlake_installation.rst index d4493e0f8..d4493e0f8 100644 --- a/docs/userguide/apexlake_installation.rst +++ b/docs/userguide/05-apexlake_installation.rst
diff --git a/docs/userguide/apexlake_api.rst b/docs/userguide/06-apexlake_api.rst index 35a1dbe3e..35a1dbe3e 100644 --- a/docs/userguide/apexlake_api.rst +++ b/docs/userguide/06-apexlake_api.rst
diff --git a/docs/userguide/03-installation.rst b/docs/userguide/07-installation.rst index a3144ef2c..25c125851 100644 --- a/docs/userguide/03-installation.rst +++ b/docs/userguide/07-installation.rst
@@ -251,3 +251,71 @@ More info about the tool can be found by executing:
 ::
   yardstick-plot -h
+
+
+Deploy InfluxDB and Grafana locally
+------------------------------------
+
+.. pull docker images
+
+Pull docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+  docker pull tutum/influxdb
+  docker pull grafana/grafana
+
+Run and configure influxdb
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Run influxdb
+::
+
+  docker run -d --name influxdb -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 tutum/influxdb
+  docker exec -it influxdb bash
+
+Configure influxdb
+::
+
+  influx
+  >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+  >CREATE DATABASE yardstick;
+  >use yardstick;
+  >show MEASUREMENTS;
+
+Run and configure grafana
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Run grafana
+::
+
+  docker run -d --name grafana -p 3000:3000 grafana/grafana
+
+Configure grafana
+::
+
+  http://{YOUR_IP_HERE}:3000
+  log in with admin/admin and configure the database data source to be {YOUR_IP_HERE}:8086
+
+.. image:: images/Grafana_config.png
+
+Configure yardstick.conf
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+
+vi /etc/yardstick/yardstick.conf
+Configure yardstick.conf as follows
+::
+
+  [DEFAULT]
+  debug = True
+  dispatcher = influxdb
+
+  [dispatcher_influxdb]
+  timeout = 5
+  target = http://{YOUR_IP_HERE}:8086
+  db_name = yardstick
+  username = root
+  password = root
+
+Now you can run yardstick test cases and store the results in InfluxDB
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/userguide/08-yardstick_plugin.rst b/docs/userguide/08-yardstick_plugin.rst new file mode 100644 index 000000000..e68db650d --- /dev/null +++ b/docs/userguide/08-yardstick_plugin.rst
@@ -0,0 +1,144 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+
+===================================
+Installing a plug-in into yardstick
+===================================
+
+Abstract
+========
+
+Yardstick currently provides a ``plugin`` CLI command to support integration
+with other OPNFV testing projects. Below is an example invocation of the
+yardstick plugin command, using the Storperf plug-in as a sample.
+
+
+Installing Storperf into yardstick
+==================================
+
+Storperf is delivered as a Docker container from
+https://hub.docker.com/r/opnfv/storperf/tags/.
+
+There are two possible methods for installation in your environment:
+
+* Run the container on the Jump Host
+* Run the container in a VM
+
+In this introduction we will install Storperf on the Jump Host.
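As a preview of where the following steps lead, running the Storperf container on the Jump Host is sketched below. This is a minimal sketch: the image name comes from the Docker Hub link above, but the port mapping and the --env-file flag are illustrative assumptions, not commands taken from this guide; the storperf_admin-rc credentials file is prepared in Step 0.
::

  # Hypothetical invocation; see Step 0 below for the storperf_admin-rc file
  docker pull opnfv/storperf
  docker run -d --name storperf --env-file ~/storperf_admin-rc -p 5000:5000 opnfv/storperf

The steps below walk through the environment preparation and the plug-in based installation in detail.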
+
+
+Step 0: Environment preparation
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+
+Running Storperf on the Jump Host
+Requirements:
+
+* Docker must be installed
+* The Jump Host must have access to the OpenStack Controller API
+* The Jump Host must have internet connectivity for downloading the docker image
+* Enough floating IPs must be available to match your agent count
+
+Before installing Storperf into yardstick you need to check your OpenStack
+environment and other dependencies:
+
+1. Make sure docker is installed.
+2. Make sure Keystone, Nova, Neutron, Glance and Heat are installed correctly.
+3. Make sure the Jump Host has access to the OpenStack Controller API.
+4. Make sure the Jump Host has internet connectivity for downloading the docker image.
+5. You need to know where to get basic OpenStack Keystone authorization info, such as
+OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, OS_USERNAME.
+6. To run a Storperf container, you need to have OpenStack Controller environment
+variables defined and passed to the Storperf container. The best way to do this is to
+put the environment variables in a "storperf_admin-rc" file. The storperf_admin-rc
+should include at least the following credential environment variables:
+
+* OS_AUTH_URL
+* OS_TENANT_ID
+* OS_TENANT_NAME
+* OS_PROJECT_NAME
+* OS_USERNAME
+* OS_PASSWORD
+* OS_REGION_NAME
+
+During environment preparation, the following "prepare_storperf_admin-rc.sh"
+script can be used to generate the storperf_admin-rc file:
+::
+
+  #!/bin/bash
+  AUTH_URL=${OS_AUTH_URL}
+  USERNAME=${OS_USERNAME:-admin}
+  PASSWORD=${OS_PASSWORD:-console}
+  TENANT_NAME=${OS_TENANT_NAME:-admin}
+  VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2}
+  PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME}
+  TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
+  rm -f ~/storperf_admin-rc
+  touch ~/storperf_admin-rc
+  echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc
+  echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc
+  echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc
+  echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc
+  echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc
+  echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc
+  echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc
+
+
+Step 1: Plug-in configuration file preparation
+++++++++++++++++++++++++++++++++++++++++++++++
+
+To install a plug-in, first you need to prepare a plug-in configuration file in
+YAML format and store it in the "plugin" directory. The plug-in configuration
+file works as the input of the yardstick "plugin" command. Below is the Storperf
+plug-in configuration file sample:
+::
+
+  ---
+  # StorPerf plugin configuration file
+  # Used for integrating StorPerf into Yardstick as a plugin
+  schema: "yardstick:plugin:0.1"
+  plugins:
+    name: storperf
+  deployment:
+    ip: 192.168.23.2
+    user: root
+    password: root
+
+In the plug-in configuration file, you need to specify the plug-in name and the
+plug-in deployment info, including the node IP and the node login username and
+password. Here Storperf will be installed on IP 192.168.23.2, which is the Jump
+Host in my local environment.
+
+Step 2: Plug-in install/remove scripts preparation
+++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Under the "yardstick/resource/scripts" directory, there are two folders: an
+"install" folder and a "remove" folder. You need to store the plug-in install
+and remove scripts in these two folders respectively.
+
+The detailed install and remove operations should be defined in these two
+scripts; a minimal sketch is shown below.
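A minimal install script might look like the following sketch. This is hypothetical content written for illustration — the real storperf.bash shipped with yardstick may differ; it assumes the install step simply pulls and starts the Storperf container on the deployment target:
::

  #!/bin/bash
  # Hypothetical yardstick/resource/scripts/install/storperf.bash
  # Pull the Storperf image and start the container, reusing the
  # credentials file generated in Step 0.
  set -e
  docker pull opnfv/storperf
  docker run -d --name storperf --env-file ~/storperf_admin-rc -p 5000:5000 opnfv/storperf

The matching script in the "remove" folder would undo this, e.g. by running "docker rm -f storperf".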
+The name of both install and remove scripts should match the plug-in name that
+you specified in the plug-in configuration file.
+For example, the install and remove scripts for Storperf are both named
+"storperf.bash".
+
+
+Step 3: Install and remove Storperf
++++++++++++++++++++++++++++++++++++
+
+To install Storperf, simply execute the following command
+::
+
+  # Install Storperf
+  yardstick plugin install plugin/storperf.yaml
+
+Removing Storperf from yardstick
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To remove Storperf, simply execute the following command
+::
+
+  # Remove Storperf
+  yardstick plugin remove plugin/storperf.yaml
+
+The yardstick plugin command uses the username and password to log into the
+deployment target and then executes the corresponding install or remove script.
diff --git a/docs/userguide/09-result-store-InfluxDB.rst b/docs/userguide/09-result-store-InfluxDB.rst new file mode 100644 index 000000000..5c49e9f7c --- /dev/null +++ b/docs/userguide/09-result-store-InfluxDB.rst
@@ -0,0 +1,86 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+
+==============================================
+Store Other Project's Test Results in InfluxDB
+==============================================
+
+Abstract
+========
+
+.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2
+
+This chapter illustrates how to run plug-in test cases and store test results
+into the community's InfluxDB. The framework is shown in Framework_.
+
+
+.. image:: images/InfluxDB_store.png
+   :width: 1200px
+   :alt: Store Other Project's Test Results in InfluxDB
+
+Store Storperf Test Results into the Community's InfluxDB
+=========================================================
+
+.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
+.. _Mingjiang: limingjiang@huawei.com
+.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2
+.. _Login: http://testresults.opnfv.org/grafana/login
+
+As shown in Framework_, there are two ways to store Storperf test results
+into the community's InfluxDB:
+
+1. Yardstick asks Storperf to run the test case. After the test case is
+   completed, Yardstick reads the test results via the ReST API from Storperf
+   and posts the test data to InfluxDB.
+
+2. Additionally, Storperf can run tests by itself and post the test result
+   directly to InfluxDB. The method for posting data directly to InfluxDB
+   will be supported in the future.
+
+Our plan is to support a REST API in the D release so that other testing
+projects can call it to use the yardstick dispatcher service to push data to
+yardstick's InfluxDB database.
+
+For now, InfluxDB only supports the line protocol; the JSON protocol is
+deprecated. The general form of the line protocol is shown below.
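For orientation, a single point in the InfluxDB line protocol consists of a measurement name, optional tags, one or more fields and an optional timestamp (the sample values here are illustrative):
::

  <measurement>[,<tag_key>=<tag_value>,...] <field_key>=<field_value>[,...] [timestamp]

  # for example:
  ping,host=athena.demo rtt.ares=1.125 1470315409868094976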
+
+Take the ping test case for example; the raw_result is in JSON format like this:
+::
+
+  {
+    "benchmark": {
+        "timestamp": 1470315409.868095,
+        "errors": "",
+        "data": {
+          "rtt": {
+              "ares": 1.125
+          }
+        },
+        "sequence": 1
+    },
+    "runner_id": 2625
+  }
+
+With the help of "influxdb_line_protocol", the JSON is transformed into a line
+string like the one below:
+::
+
+  'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown,
+  runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3-
+  301c99963656,version=unknown rtt.ares=1.125 1470315409868094976'
+
+So, for data output in JSON format, you just need to transform the JSON into
+line format and call the InfluxDB API to post the data into the database. All
+of this functionality has been implemented in Influxdb_. If you need support on
+this, please contact Mingjiang_.
+::
+
+  curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' --data-binary
+  'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...'
+
+Grafana will be used for visualizing the collected test data, as shown in
+Visual_. Grafana can be accessed via Login_.
+
+
+.. image:: images/results_visualization.png
+   :width: 1200px
+   :alt: results visualization
+
diff --git a/docs/userguide/03-list-of-tcs.rst b/docs/userguide/10-list-of-tcs.rst index de48c7bc7..7e8c85433 100644 --- a/docs/userguide/03-list-of-tcs.rst +++ b/docs/userguide/10-list-of-tcs.rst
@@ -28,6 +28,7 @@ Generic NFVI Test Case Descriptions
    opnfv_yardstick_tc001.rst
    opnfv_yardstick_tc002.rst
+   opnfv_yardstick_tc004.rst
    opnfv_yardstick_tc005.rst
    opnfv_yardstick_tc008.rst
    opnfv_yardstick_tc009.rst
@@ -38,6 +39,17 @@ Generic NFVI Test Case Descriptions
    opnfv_yardstick_tc024.rst
    opnfv_yardstick_tc037.rst
    opnfv_yardstick_tc038.rst
+   opnfv_yardstick_tc042.rst
+   opnfv_yardstick_tc043.rst
+   opnfv_yardstick_tc044.rst
+   opnfv_yardstick_tc055.rst
+   opnfv_yardstick_tc061.rst
+   opnfv_yardstick_tc063.rst
+   opnfv_yardstick_tc069.rst
+   opnfv_yardstick_tc070.rst
+   opnfv_yardstick_tc071.rst
+   opnfv_yardstick_tc072.rst
+   opnfv_yardstick_tc075.rst
 OPNFV Feature Test Cases
 ========================
diff --git a/docs/userguide/images/Grafana_config.png b/docs/userguide/images/Grafana_config.png Binary files differ new file mode 100644 index 000000000..cb63098dc --- /dev/null +++ b/docs/userguide/images/Grafana_config.png
diff --git a/docs/userguide/images/InfluxDB_store.png b/docs/userguide/images/InfluxDB_store.png Binary files differ new file mode 100644 index 000000000..1770fd255 --- /dev/null +++ b/docs/userguide/images/InfluxDB_store.png
diff --git a/docs/userguide/images/results_visualization.png b/docs/userguide/images/results_visualization.png Binary files differ new file mode 100644 index 000000000..cd092808b --- /dev/null +++ b/docs/userguide/images/results_visualization.png
diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst index 72a92a69f..0aa112a45 100644 --- a/docs/userguide/index.rst +++ b/docs/userguide/index.rst
@@ -12,11 +12,13 @@ Yardstick Overview
    01-introduction
    02-methodology
-   architecture
+   03-architecture
    04-vtc-overview
-   apexlake_installation
-   apexlake_api
-   03-installation
-   03-list-of-tcs
+   05-apexlake_installation
+   06-apexlake_api
+   07-installation
+   08-yardstick_plugin
+   09-result-store-InfluxDB
+   10-list-of-tcs
    glossary
    references
diff --git a/docs/userguide/opnfv_yardstick_tc001.rst b/docs/userguide/opnfv_yardstick_tc001.rst index 4cf4b94a4..fac375d50 100644 --- a/docs/userguide/opnfv_yardstick_tc001.rst +++ b/docs/userguide/opnfv_yardstick_tc001.rst
@@ -21,14 +21,14 @@ Yardstick Test Case Description TC001
+--------------+--------------------------------------------------------------+
|test purpose | To evaluate the IaaS network performance with regards to |
| | flows and throughput, such as if and how different amounts |
-| | of flows matter for the throughput between hosts on different|
-| | compute blades. Typically e.g. the performance of a vSwitch |
-| | depends on the number of flows running through it. Also |
-| | performance of other equipment or entities can depend |
+| | of flows matter for the throughput between hosts on |
+| | different compute blades. Typically e.g. the performance of |
+| | a vSwitch depends on the number of flows running through it. |
+| | Also performance of other equipment or entities can depend |
| | on the number of flows or the packet sizes used. |
| | The purpose is also to be able to spot trends. Test results, |
-| | graphs ans similar shall be stored for comparison reasons and|
-| | product evolution understanding between different OPNFV |
+| | graphs and similar shall be stored for comparison reasons |
+| | and product evolution understanding between different OPNFV |
| | versions and/or configurations. |
| | |
+--------------+--------------------------------------------------------------+
@@ -37,7 +37,8 @@ Yardstick Test Case Description TC001
| | Packet size: 60 bytes |
| | Number of ports: 10, 50, 100, 500 and 1000, where each |
| | runs for 20 seconds. The whole sequence is run |
-| | twice. The client and server are distributed on different HW.|
+| | twice. The client and server are distributed on different |
+| | HW. |
| | For SLA max_ppm is set to 1000. The amount of configured |
| | ports map to between 110 up to 1001000 flows, respectively. |
| | |
@@ -60,7 +61,7 @@ Yardstick Test Case Description TC001
| | of flows and test duration. Default values exist. |
| | |
| | SLA (optional): max_ppm: The number of packets per million |
-| | packets sent that are acceptable to loose, not received. |
+| | packets sent that are acceptable to lose, not received. |
| | |
+--------------+--------------------------------------------------------------+
|pre-test | The test case image needs to be installed into Glance |
diff --git a/docs/userguide/opnfv_yardstick_tc004.rst b/docs/userguide/opnfv_yardstick_tc004.rst new file mode 100644 index 000000000..301286126 --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc004.rst
@@ -0,0 +1,77 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC004
+*************************************
+
+.. _cachestat: https://github.com/brendangregg/perf-tools/tree/master/fs
+
++-----------------------------------------------------------------------------+
+|Cache Utilization |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC004_Cache Utilization |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Cache Utilization |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS compute capability with regards to |
+| | cache utilization. This test case should be run in parallel |
+| | to other Yardstick test cases and not run as a stand-alone |
+| | test case.
|
+| | Measure the cache usage statistics including cache hit, |
+| | cache miss, hit ratio and page cache size. |
+| | Both average and maximum values are obtained. |
+| | The purpose is also to be able to spot trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | File: cachestat.yaml (in the 'samples' directory) |
+| | |
+| | * interval: 1 - repeat, pausing every 1 second in-between. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | cachestat |
+| | |
+| | cachestat is not always part of a Linux distribution, hence |
+| | it needs to be installed. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | cachestat_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different: |
+| | |
+| | * interval; |
+| | * runner Duration. |
+| | |
+| | There are default values for each above-mentioned option. |
+| | Run in background with other test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with cachestat included in the image. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The host is installed as client. The related TC, or TCs, is |
+| | invoked and cachestat logs are produced and stored. |
+| | |
+| | Result: logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | None. Cache utilization results are fetched and stored. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc011.rst b/docs/userguide/opnfv_yardstick_tc011.rst index 1c643cd72..cf2fd5055 100644 --- a/docs/userguide/opnfv_yardstick_tc011.rst +++ b/docs/userguide/opnfv_yardstick_tc011.rst
@@ -56,12 +56,12 @@ Yardstick Test Case Description TC011
| | ETSI-NFV-TST001 |
| | |
+--------------+--------------------------------------------------------------+
-|applicability | Test can be configured with different |
+|applicability | Test can be configured with different: |
| | |
| | * bandwidth: Test case can be configured with different |
-| | bandwidth |
+| | bandwidth. |
| | |
-| | * duration: The test duration can be configured |
+| | * duration: The test duration can be configured. |
| | |
| | * jitter: SLA is optional. The SLA in this test case |
| | serves as an example. |
diff --git a/docs/userguide/opnfv_yardstick_tc024.rst b/docs/userguide/opnfv_yardstick_tc024.rst index ffdacb106..8d15e8d2f 100644 --- a/docs/userguide/opnfv_yardstick_tc024.rst +++ b/docs/userguide/opnfv_yardstick_tc024.rst
@@ -22,17 +22,17 @@ Yardstick Test Case Description TC024
|test purpose | To evaluate the CPU load performance of the IaaS.
This test |
| | case should be run in parallel to other Yardstick test cases |
| | and not run as a stand-alone test case. |
-| | |
-| | The purpose is also to be able to spot trends. Test results, |
-| | graphs ans similar shall be stored for comparison reasons and|
-| | product evolution understanding between different OPNFV |
-| | versions and/or configurations. |
+| | Average, minimum and maximum values are obtained. |
+| | The purpose is also to be able to spot trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
| | |
+--------------+--------------------------------------------------------------+
|configuration | file: cpuload.yaml (in the 'samples' directory) |
| | |
-| | There is are no additional configurations to be set for this |
-| | TC. |
+| | * interval: 1 - repeat, pausing every 1 second in-between. |
+| | * count: 10 - display statistics 10 times, then exit. |
| | |
+--------------+--------------------------------------------------------------+
|test tool | mpstat |
@@ -46,7 +46,14 @@ Yardstick Test Case Description TC024
|references | man-pages_ |
| | |
+--------------+--------------------------------------------------------------+
-|applicability | Run in background with other test cases. |
+|applicability | Test can be configured with different: |
+| | |
+| | * interval; |
+| | * count; |
+| | * runner Iteration and intervals. |
+| | |
+| | There are default values for each above-mentioned option. |
+| | Run in background with other test cases. |
| | |
+--------------+--------------------------------------------------------------+
|pre-test | The test case image needs to be installed into Glance |
diff --git a/docs/userguide/opnfv_yardstick_tc042.rst b/docs/userguide/opnfv_yardstick_tc042.rst new file mode 100644 index 000000000..8660d9297 --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc042.rst
@@ -0,0 +1,87 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, ZTE and others.
+
+*************************************
+Yardstick Test Case Description TC042
+*************************************
+
+.. _DPDK: http://dpdk.org/doc/guides/index.html
+.. _Testpmd: http://dpdk.org/doc/guides/testpmd_app_ug/index.html
+.. _Pktgen-dpdk: http://pktgen.readthedocs.io/en/latest/index.html
+
++-----------------------------------------------------------------------------+
+|Network Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC042_DPDK pktgen latency measurements |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | L2 Network Latency |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | Measure L2 network latency when DPDK is enabled between hosts|
+| | on different compute blades. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc042.yaml |
+| | |
+| | * Packet size: 64 bytes |
+| | * SLA (max_latency): 100 usec |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | DPDK_ |
+| | Pktgen-dpdk_ |
+| | |
+| | (DPDK and Pktgen-dpdk are not part of a Linux distribution, |
+| | hence they need to be installed.
|
+| | As an example see the /yardstick/tools/ directory for how to |
+| | generate a Linux image with DPDK and pktgen-dpdk included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|references | DPDK_ |
+| | |
+| | Pktgen-dpdk_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes. Default |
+| | values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with DPDK and pktgen-dpdk included in it. |
+| | |
+| | The NICs of the compute nodes in the POD must support DPDK. |
+| | |
+| | At least the compute nodes must have hugepages set up. |
+| | |
+| | If you want to achieve a high performance result, it is |
+| | recommended to use NUMA, CPU pinning, OVS and so on. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | The hosts are installed on different blades, as server and |
+| | client. Both server and client have three interfaces. The |
+| | first one is management such as ssh. The other two are used |
+| | by DPDK. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Testpmd_ is invoked with configurations to forward packets |
+| | from one DPDK port to the other on the server. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Pktgen-dpdk is invoked with configurations as a traffic |
+| | generator and logs are produced and stored on the client. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc043.rst b/docs/userguide/opnfv_yardstick_tc043.rst index 2f907e9ef..b6e557d86 100644 --- a/docs/userguide/opnfv_yardstick_tc043.rst +++ b/docs/userguide/opnfv_yardstick_tc043.rst
@@ -13,8 +13,9 @@ Yardstick Test Case Description TC043
|Network Latency Between NFVI Nodes |
| |
+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC043_Latency_between_ |
-| | NFVI_nodes_measurements |
+|test case id | OPNFV_YARDSTICK_TC043_Latency_between_NFVI_nodes_ |
+| | measurements |
+| | |
+--------------+--------------------------------------------------------------+
|metric | RTT, Round Trip Time |
| | |
diff --git a/docs/userguide/opnfv_yardstick_tc050.rst b/docs/userguide/opnfv_yardstick_tc050.rst new file mode 100644 index 000000000..8890c9d53 --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc050.rst
@@ -0,0 +1,135 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+..
14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC050
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node Network High Availability |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC050: OpenStack Controller Node Network |
+| | High Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | control node. When one of the controllers fails to connect |
+| | to the network, the OpenStack services on this node break |
+| | down. These OpenStack services should still be accessible |
+| | from other controller nodes, and the services on the failed |
+| | controller node should be isolated. |
++--------------+--------------------------------------------------------------+
+|test method | This test case turns off the network interfaces of a |
+| | specified control node, then checks whether all services |
+| | provided by the control node are OK with some monitor tools. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "close-interface" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "close-interface" in |
+| | this test case. |
+| | 2) host: which is the name of a control node being attacked. |
+| | 3) interface: the network interface to be turned off. |
+| | |
+| | There are four instances of the "close-interface" attacker: |
+| | attacker1 (for the public network): |
+| | -fault_type: "close-interface" |
+| | -host: node1 |
+| | -interface: "br-ex" |
+| | attacker2 (for the management network): |
+| | -fault_type: "close-interface" |
+| | -host: node1 |
+| | -interface: "br-mgmt" |
+| | attacker3 (for the storage network): |
+| | -fault_type: "close-interface" |
+| | -host: node1 |
+| | -interface: "br-storage" |
+| | attacker4 (for the private network): |
+| | -fault_type: "close-interface" |
+| | -host: node1 |
+| | -interface: "br-mesh" |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, the monitor named "openstack-cmd" is |
+| | needed. The monitor needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for the |
+| | request |
+| | |
+| | There are four instances of the "openstack-cmd" monitor: |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova image-list" |
+| | monitor2: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "neutron router-list" |
+| | monitor3: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "heat stack-list" |
+| | monitor4: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "cinder list" |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified OpenStack command request. |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project.
Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc050.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the turn-off-network-interface script with the param value |
+| | specified by "interface". |
+| | |
+| | Result: Network interfaces will be turned down. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action taken when the test case exits. It turns up |
+| | the network interface of the control node if it is not up. |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc051.rst b/docs/userguide/opnfv_yardstick_tc051.rst new file mode 100644 index 000000000..3402ccd92 --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc051.rst
@@ -0,0 +1,117 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC051
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node CPU Overload High Availability |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC051: OpenStack Controller Node CPU |
+| | Overload High Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | control node.
When the CPU usage of a specified controller node is |
+| | stressed to 100%, the OpenStack services on this node break |
+| | down. These OpenStack services should still be accessible |
+| | from other controller nodes, and the services on the failed |
+| | controller node should be isolated. |
++--------------+--------------------------------------------------------------+
+|test method | This test case stresses the CPU usage of a specified control |
+| | node to 100%, then checks whether all services provided by |
+| | the environment are OK with some monitor tools. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "stress-cpu" is |
+| | needed. This attacker includes two parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "stress-cpu" in |
+| | this test case. |
+| | 2) host: which is the name of a control node being attacked. |
+| | e.g. |
+| | -fault_type: "stress-cpu" |
+| | -host: node1 |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, the monitor named "openstack-cmd" is |
+| | needed. The monitor needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for the |
+| | request |
+| | |
+| | There are four instances of the "openstack-cmd" monitor: |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova image-list" |
+| | monitor2: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "neutron router-list" |
+| | monitor3: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "heat stack-list" |
+| | monitor4: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "cinder list" |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified OpenStack command request. |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc051.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml.
|
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the stress cpu script on the host. |
+| | |
+| | Result: The CPU usage of the host will be stressed to 100%. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action taken when the test case exits. It kills |
+| | the process that stresses the CPU usage. |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc052.rst b/docs/userguide/opnfv_yardstick_tc052.rst new file mode 100644 index 000000000..9514b6819 --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc052.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC052
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node Disk I/O Block High Availability |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC052: OpenStack Controller Node Disk I/O |
+| | Block High Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | control node. When the disk I/O of a specified disk is |
+| | blocked, the OpenStack services on this node break down. |
+| | Read and write services should still be accessible from |
+| | other controller nodes, and the services on the failed |
+| | controller node should be isolated. |
++--------------+--------------------------------------------------------------+
+|test method | This test case blocks the disk I/O of a specified control |
+| | node, then checks whether the services that need to read or |
+| | write the disk of the control node are OK with some monitor |
+| | tools. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "disk-block" is |
+| | needed. This attacker includes two parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "disk-block" in this |
+| | test case.
|
+| | 2) host: which is the name of a control node being attacked. |
+| | e.g. |
+| | -fault_type: "disk-block" |
+| | -host: node1 |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "openstack-cmd" monitor constantly requests a |
+| | specific OpenStack command, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+| | |
+| | e.g. |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova flavor-list" |
+| | |
+| | 2. the second monitor verifies the read and write function |
+| | by an "operation" and a "result checker". |
+| | The "operation" has two parameters: |
+| | 1) operation_type: which is used for finding the operation |
+| | class and related scripts. |
+| | 2) action_parameter: parameters for the operation. |
+| | The "result checker" has three parameters: |
+| | 1) checker_type: which is used for finding the result |
+| | checker class and related scripts. |
+| | 2) expectedValue: the expected value for the output of the |
+| | checker script. |
+| | 3) condition: whether the expected value is contained in the |
+| | output of the checker script or is exactly the same as the |
+| | output. |
+| | |
+| | In this case, the "operation" adds a flavor and the "result |
+| | checker" checks whether the flavor is created. Their |
+| | parameters show as follows: |
+| | operation: |
+| | -operation_type: "nova-create-flavor" |
+| | -action_parameter: |
+| | flavorconfig: "test-001 test-001 100 1 1" |
+| | result checker: |
+| | -checker_type: "check-flavor" |
+| | -expectedValue: "test-001" |
+| | -condition: "in" |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified OpenStack command request. |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc052.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | do attacker: connect the host through SSH, and then execute |
+| | the block disk I/O script on the host.
|
+| | |
+| | Result: The disk I/O of the host will be blocked. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | do operation: add a flavor |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | do result checker: check whether the flavor is created |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 6 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action taken when the test case exits. It executes |
+| | the release-disk-I/O script to release the blocked I/O. |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails if the monitor SLA is not passed or the result checker |
+| | is not passed, or if there is a test case execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc053.rst b/docs/userguide/opnfv_yardstick_tc053.rst new file mode 100644 index 000000000..8808d12d9 --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc053.rst
@@ -0,0 +1,142 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC053
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Load Balance Service High Availability |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC053: OpenStack Controller Load Balance |
+| | Service High Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | load balance service (currently HAProxy) that supports |
+| | OpenStack on the controller node. When the load balance |
+| | service of a specified controller node is killed, the test |
+| | checks whether load balancers on other controller nodes |
+| | still work and whether the controller node will restart the |
+| | killed load balancer. |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of the load balance |
+| | service on a selected control node, then checks whether the |
+| | request of the related OpenStack command is OK and the |
+| | killed processes are recovered. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts.
It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes using the |
+| | same name on the host, all of them are killed by this |
+| | attacker. In this case, this parameter should always be set |
+| | to "haproxy". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "haproxy" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "openstack-cmd" monitor constantly requests a |
+| | specific OpenStack command, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+| | |
+| | 2. the "process" monitor checks whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for the monitor |
+| | 3) host: which is the name of the node running the process |
+| | In this case, the command_name of monitor1 should be a |
+| | service supported by the load balancer and the process_name |
+| | of monitor2 should be "haproxy", for example: |
+| | |
+| | e.g. |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova image-list" |
+| | monitor2: |
+| | -monitor_type: "process" |
+| | -process_name: "haproxy" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1) service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified OpenStack command request. |
+| | 2) process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to its recovery |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc053.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml.
|
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the kill-process script with the param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action taken when the test case exits. It will |
+| | check the status of the specified process on the host, and |
+| | restart the process if it is not running, for the next test |
+| | cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/userguide/opnfv_yardstick_tc054.rst b/docs/userguide/opnfv_yardstick_tc054.rst new file mode 100644 index 000000000..7f92be2bc --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc054.rst
@@ -0,0 +1,125 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC054
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Virtual IP High Availability |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC054: OpenStack Virtual IP High |
+| | Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | virtual IP in the environment. When the master node of the |
+| | virtual IP is abnormally shut down, the connection to the |
+| | virtual IP and the services bound to it should still be OK. |
++--------------+--------------------------------------------------------------+
+|test method | This test case shuts down the virtual IP master node with |
+| | some fault injection tools, then checks whether the virtual |
+| | IPs can be pinged and the services bound to the virtual IP |
+| | are OK with some monitor tools. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "control-shutdown" is |
+| | needed. This attacker includes two parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "control-shutdown" in |
+| | this test case.
|
+| | 2) host: which is the name of a control node being attacked. |
+| | |
+| | In this case the host should be the virtual IP master node, |
+| | which means the host IP is the virtual IP, for example: |
+| | -fault_type: "control-shutdown" |
+| | -host: node1 (the VIP master node) |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "ip_status" monitor that pings a specific IP to check |
+| | the connectivity of this IP, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to "ip_status" |
+| | for this monitor. |
+| | 2) ip_address: The IP to be pinged. In this case, ip_address |
+| | should be the virtual IP. |
+| | |
+| | 2. the "openstack-cmd" monitor constantly requests a |
+| | specific OpenStack command, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+| | |
+| | e.g. |
+| | monitor1: |
+| | -monitor_type: "ip_status" |
+| | -host: 192.168.0.2 |
+| | monitor2: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova image-list" |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1) ping_outage_time: which indicates the maximum outage time |
+| | to ping the specified host. |
+| | 2) service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified OpenStack command request. |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc054.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the shutdown script on the VIP master node.
+|              |                                                              |
+|              | Result: VIP master node will be shut down                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test cases exit. It restarts |
+|              | the original VIP master node if it is not restarted.         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
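+
+As an illustration only, the attacker and monitors described above may be
+combined in a scenario roughly as sketched below; the scenario type, node
+name, waiting time and SLA values here are assumptions, not a copy of the
+shipped opnfv_yardstick_tc054.yaml:
+::
+
+    ---
+    scenarios:
+    -
+      type: ServiceHA
+      options:
+        attackers:
+        - fault_type: "control-shutdown"
+          host: node1
+        monitors:
+        - monitor_type: "ip_status"
+          ip_address: 192.168.0.2
+        - monitor_type: "openstack-cmd"
+          command_name: "nova image-list"
+        waiting_time: 30
+      nodes:
+        node1: node1.LF
+      sla:
+        outage_time: 5
+        action: monitor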
diff --git a/docs/userguide/opnfv_yardstick_tc061.rst b/docs/userguide/opnfv_yardstick_tc061.rst
new file mode 100644
index 000000000..1d424414e
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc061.rst
@@ -0,0 +1,88 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC061
+*************************************
+
+.. _man-pages: http://linux.die.net/man/1/sar
+
++-----------------------------------------------------------------------------+
+|Network Utilization                                                          |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC061_Network Utilization                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Network utilization                                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS network capability with regard to       |
+|              | network utilization, including total number of packets       |
+|              | received per second, total number of packets transmitted per |
+|              | second, total number of kilobytes received per second, total |
+|              | number of kilobytes transmitted per second, number of        |
+|              | compressed packets received per second (for cslip etc.),     |
+|              | number of compressed packets transmitted per second, number  |
+|              | of multicast packets received per second, and utilization    |
+|              | percentage of the network interface.                         |
+|              | This test case should be run in parallel with other          |
+|              | Yardstick test cases and not run as a stand-alone test case. |
+|              | Measure the network usage statistics from the network        |
+|              | devices. Average, minimum and maximum values are obtained.   |
+|              | The purpose is also to be able to spot trends.               |
+|              | Test results, graphs and similar shall be stored for         |
+|              | comparison reasons and product evolution understanding       |
+|              | between different OPNFV versions and/or configurations.      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | File: netutilization.yaml (in the 'samples' directory)       |
+|              |                                                              |
+|              | * interval: 1 - repeat, pausing 1 second in-between.         |
+|              | * count: 1 - display statistics once, then exit.             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | sar                                                          |
+|              |                                                              |
+|              | The sar command writes to standard output the contents of    |
+|              | selected cumulative activity counters in the operating       |
+|              | system.                                                      |
+|              | sar is normally part of a Linux distribution, hence it       |
+|              | doesn't need to be installed.                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | man-pages_                                                   |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              | * interval;                                                  |
+|              | * count;                                                     |
+|              | * runner Iteration and intervals.                            |
+|              |                                                              |
+|              | There are default values for each above-mentioned option.    |
+|              | Run in the background with other test cases.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with sar included in the image.                              |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result.                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The host is installed as client. The related TC, or TCs, is  |
+|              | invoked and sar logs are produced and stored.                |
+|              |                                                              |
+|              | Result: logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None. Network utilization results are fetched and stored.    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
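+
+For reference only, the statistics listed above correspond to the columns of
+sar's network device report; a minimal sketch of the invocation implied by
+the interval and count options above (column names as in recent sysstat
+versions):
+::
+
+    # one report of network device statistics over a 1 second interval;
+    # columns include rxpck/s, txpck/s, rxkB/s, txkB/s, rxcmp/s, txcmp/s,
+    # rxmcst/s and, on newer sysstat versions, %ifutil
+    sar -n DEV 1 1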
diff --git a/docs/userguide/opnfv_yardstick_tc063.rst b/docs/userguide/opnfv_yardstick_tc063.rst
new file mode 100644
index 000000000..a77653aa5
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc063.rst
@@ -0,0 +1,81 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC063
+*************************************
+
+.. _iostat: http://linux.die.net/man/1/iostat
+.. _fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
+
++-----------------------------------------------------------------------------+
+|Storage Capacity                                                             |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC063_Storage Capacity                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Storage/disk size, block size                                |
+|              | Disk Utilization                                             |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case checks the parameters that decide which of    |
+|              | several models is run; each model has its own specified      |
+|              | measurement task. The test purposes are to measure disk      |
+|              | size, block size and disk utilization. With the test         |
+|              | results, we can evaluate the storage capacity of the host.   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc063.yaml                             |
+|              |                                                              |
+|              | * test_type: "disk_size"                                     |
+|              | * runner:                                                    |
+|              |   type: Iteration                                            |
+|              |   iterations: 1 - test is run 1 time iteratively.            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | fdisk                                                        |
+|              | A command-line utility that provides disk partitioning       |
+|              | functions.                                                   |
+|              |                                                              |
+|              | iostat                                                       |
+|              | This is a computer system monitor tool used to collect and   |
+|              | show operating system storage input and output statistics.   |
++--------------+--------------------------------------------------------------+
+|references    | iostat_                                                      |
+|              | fdisk_                                                       |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              | * test_type: "disk size", "block size", "disk utilization"   |
+|              | * interval: 1 - how often to stat disk utilization           |
+|              |   type: int                                                  |
+|              |   unit: seconds                                              |
+|              | * count: 15 - how many times to stat disk utilization        |
+|              |   type: int                                                  |
+|              |   unit: na                                                   |
+|              | There are default values for each above-mentioned option.    |
+|              | Run in the background with other test cases.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | The specified storage capacity and disk information are      |
+|              | output to a file in sequence.                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The pod is available and the hosts are installed. Node5 is   |
+|              | used and logs are produced and stored.                       |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None.                                                        |
++--------------+--------------------------------------------------------------+
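+
+For reference only, a sketch of the kind of commands behind the three test
+types; the exact invocations used by the scenario scripts are assumptions:
+::
+
+    # disk_size / block_size: partition tables, disk sizes and sector sizes
+    fdisk -l
+
+    # disk_utilization: extended device statistics, 15 samples at 1 second
+    # intervals (matching the interval and count options above)
+    iostat -dx 1 15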
diff --git a/docs/userguide/opnfv_yardstick_tc069.rst b/docs/userguide/opnfv_yardstick_tc069.rst
index 51807e246..af0e64fbf 100644
--- a/docs/userguide/opnfv_yardstick_tc069.rst
+++ b/docs/userguide/opnfv_yardstick_tc069.rst
@@ -9,6 +9,9 @@ Yardstick Test Case Description TC069
 
 .. _RAMspeed: http://alasir.com/software/ramspeed/
 
+.. table::
+   :class: longtable
+
 +-----------------------------------------------------------------------------+
 |Memory Bandwidth                                                             |
 |                                                                             |
@@ -25,9 +28,9 @@ Yardstick Test Case Description TC069
 |              | while reading and writing certain blocks of data (starting  |
 |              | from 1Kb and further in power of 2) continuously through ALU |
 |              | and FPU respectively.                                        |
-|              | Measure different aspects of memory performance via synthetic|
-|              | simulations. Each simulation consists of four performances   |
-|              | (Copy, Scale, Add, Triad).                                   |
+|              | Measure different aspects of memory performance via          |
+|              | synthetic simulations. Each simulation consists of four      |
+|              | performances (Copy, Scale, Add, Triad).                      |
 |              | Test results, graphs and similar shall be stored for         |
 |              | comparison reasons and product evolution understanding       |
 |              | between different OPNFV versions and/or configurations.      |
@@ -37,12 +40,14 @@ Yardstick Test Case Description TC069
 |              |                                                              |
 |              | * SLA (optional): 7000 (MBps) min_bandwidth: The minimum     |
 |              |   amount of memory bandwidth that is accepted.               |
-|              | * type_id: 1 - runs a specified benchmark (by an ID number): |
+|              | * type_id: 1 - runs a specified benchmark                    |
+|              |   (by an ID number):                                         |
 |              |   1 -- INTmark [writing]    4 -- FLOATmark [writing]         |
 |              |   2 -- INTmark [reading]    5 -- FLOATmark [reading]         |
 |              |   3 -- INTmem 6 -- FLOATmem                                  |
-|              | * block_size: 64 Megabytes - the maximum block size per array|
-|              | * load: 32 Gigabytes - the amount of data load per pass      |
+|              | * block_size: 64 Megabytes - the maximum block               |
+|              |   size per array.                                            |
+|              | * load: 32 Gigabytes - the amount of data load per pass.     |
 |              | * iterations: 5 - test is run 5 times iteratively.           |
 |              | * interval: 1 - there is 1 second delay between each         |
 |              |   iteration.                                                 |
@@ -52,8 +57,8 @@ Yardstick Test Case Description TC069
 |              |                                                              |
 |              | RAMspeed is a free open source command line utility to       |
 |              | measure cache and memory performance of computer systems.    |
-|              | RAMspeed is not always part of a Linux distribution, hence it|
-|              | needs to be installed in the test image.                     |
+|              | RAMspeed is not always part of a Linux distribution, hence   |
+|              | it needs to be installed in the test image.                  |
 |              |                                                              |
 +--------------+--------------------------------------------------------------+
 |references    | RAMspeed_                                                    |
@@ -83,8 +88,8 @@ Yardstick Test Case Description TC069
 |test sequence | description and expected result                              |
 |              |                                                              |
 +--------------+--------------------------------------------------------------+
-|step 1        | The host is installed as client. RAMspeed is invoked and logs|
-|              | are produced and stored.                                     |
+|step 1        | The host is installed as client. RAMspeed is invoked and     |
+|              | logs are produced and stored.                                |
 |              |                                                              |
 |              | Result: logs are stored.                                     |
 |              |                                                              |
diff --git a/docs/userguide/opnfv_yardstick_tc073.rst b/docs/userguide/opnfv_yardstick_tc073.rst
new file mode 100644
index 000000000..a6499eabb
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc073.rst
@@ -0,0 +1,81 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC073
+*************************************
+
+.. _netperf: http://www.netperf.org/netperf/training/Netperf.html
+
++-----------------------------------------------------------------------------+
+|Throughput per NFVI node test                                                |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC073_Network latency and throughput between |
+|              | nodes                                                        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Network latency and throughput                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS network performance with regard to      |
+|              | flows and throughput, such as whether and how different      |
+|              | packet sizes and numbers of flows affect the throughput      |
+|              | between nodes in one pod.                                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc073.yaml                             |
+|              |                                                              |
+|              | Packet size: default 1024 bytes.                             |
+|              |                                                              |
+|              | Test length: default 20 seconds.                             |
+|              |                                                              |
+|              | The client and server are distributed on different nodes.    |
+|              |                                                              |
+|              | For the SLA, max_mean_latency is set to 100.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | netperf                                                      |
+|              | Netperf is a software application that provides network      |
+|              | bandwidth testing between two hosts on a network. It         |
+|              | supports Unix domain sockets, TCP, SCTP, DLPI and UDP via    |
+|              | BSD Sockets. Netperf provides a number of predefined tests,  |
+|              | e.g. to measure bulk (unidirectional) data transfer or       |
+|              | request response performance.                                |
+|              | (netperf is not always part of a Linux distribution, hence   |
+|              | it needs to be installed.)                                   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | netperf man pages                                            |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes and       |
+|              | test duration. Default values exist.                         |
+|              |                                                              |
+|              | SLA (optional): max_mean_latency                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The POD can be reached by an external IP and logged on to    |
+|conditions    | via SSH.                                                     |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | Install the netperf tool on each specified node; one acts as |
+|              | the server, and the other as the client.                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | Log on to the client node and use the netperf command to     |
+|              | execute the network performance test.                        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | The throughput results are stored.                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
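+
+For reference only, a sketch of the kind of netperf invocation the test
+wraps, using the defaults above; the server IP and the exact options the
+scenario passes are assumptions:
+::
+
+    # on the server node: start the netperf daemon
+    netserver
+
+    # on the client node: 20 second TCP stream test with 1024 byte messages
+    netperf -H <server-ip> -l 20 -t TCP_STREAM -- -m 1024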
diff --git a/docs/userguide/opnfv_yardstick_tc074.rst b/docs/userguide/opnfv_yardstick_tc074.rst
new file mode 100644
index 000000000..c938f5dfd
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc074.rst
@@ -0,0 +1,137 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC074
+*************************************
+
+.. _Storperf: https://wiki.opnfv.org/display/storperf/Storperf
+
++-----------------------------------------------------------------------------+
+|Storperf                                                                     |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC074_Storperf                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Storage performance                                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | Storperf integration with yardstick. The purpose of StorPerf |
+|              | is to provide a tool to measure block and object storage     |
+|              | performance in an NFVI. When complemented with a             |
+|              | characterization of typical VF storage performance           |
+|              | requirements, it can provide pass/fail thresholds for test,  |
+|              | staging, and production NFVI environments.                   |
+|              |                                                              |
+|              | The benchmarks developed for block and object storage will   |
+|              | be sufficiently varied to provide a good preview of expected |
+|              | storage performance behavior for any type of VNF workload.   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc074.yaml                             |
+|              |                                                              |
+|              | * agent_count: 1 - the number of VMs to be created           |
+|              | * agent_image: "Ubuntu-14.04" - image used for creating VMs  |
+|              | * public_network: "ext-net" - name of public network         |
+|              | * volume_size: 2 - cinder volume size                        |
+|              | * block_sizes: "4096" - data block size                      |
+|              | * queue_depths: "4"                                          |
+|              | * StorPerf_ip: "192.168.200.2"                               |
+|              | * query_interval: 10 - state query interval                  |
+|              | * timeout: 600 - maximum allowed job time                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Storperf                                                     |
+|              |                                                              |
+|              | StorPerf is a tool to measure block and object storage       |
+|              | performance in an NFVI.                                      |
+|              |                                                              |
+|              | StorPerf is delivered as a Docker container from             |
+|              | https://hub.docker.com/r/opnfv/storperf/tags/.               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | Storperf_                                                    |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              | * agent_count                                                |
+|              | * volume_size                                                |
+|              | * block_sizes                                                |
+|              | * queue_depths                                               |
+|              | * query_interval                                             |
+|              | * timeout                                                    |
+|              | * target=[device or path]                                    |
+|              |   The path to either an attached storage device              |
+|              |   (/dev/vdb, etc.) or a directory path (/opt/storperf) that  |
+|              |   will be used to execute the performance test. In the case  |
+|              |   of a device, the entire device will be used. If not        |
+|              |   specified, the current directory will be used.             |
+|              | * workload=[workload module]                                 |
+|              |   If not specified, the default is to run all workloads. The |
+|              |   workload types are:                                        |
+|              |   - rs: 100% Read, sequential data                           |
+|              |   - ws: 100% Write, sequential data                          |
+|              |   - rr: 100% Read, random access                             |
+|              |   - wr: 100% Write, random access                            |
+|              |   - rw: 70% Read / 30% write, random access                  |
+|              | * nossd: Do not perform SSD style preconditioning.           |
+|              | * nowarm: Do not perform a warmup prior to measurements.     |
+|              | * report=[job_id]                                            |
+|              |   Query the status of the supplied job_id and report on      |
+|              |   metrics. If a workload is supplied, will report on only    |
+|              |   that subset.                                               |
+|              |                                                              |
+|              | There are default values for each above-mentioned option.    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | If you do not have an Ubuntu 14.04 image in Glance, you will |
+|conditions    | need to add one. A key pair for launching agents is also     |
+|              | required.                                                    |
+|              |                                                              |
+|              | Storperf is required to be installed in the environment.     |
+|              | There are two possible methods for Storperf installation:    |
+|              |   Run container on Jump Host                                 |
+|              |   Run container in a VM                                      |
+|              |                                                              |
+|              | Running StorPerf on Jump Host                                |
+|              | Requirements:                                                |
+|              | - Docker must be installed                                   |
+|              | - Jump Host must have access to the OpenStack Controller     |
+|              |   API                                                        |
+|              | - Jump Host must have internet connectivity for              |
+|              |   downloading docker image                                   |
+|              | - Enough floating IPs must be available to match your        |
+|              |   agent count                                                |
+|              |                                                              |
+|              | Running StorPerf in a VM                                     |
+|              | Requirements:                                                |
+|              | - VM has docker installed                                    |
+|              | - VM has OpenStack Controller credentials and can            |
+|              |   communicate with the Controller API                        |
+|              | - VM has internet connectivity for downloading the           |
+|              |   docker image                                               |
+|              | - Enough floating IPs must be available to match your        |
+|              |   agent count                                                |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | Storperf is installed and the Ubuntu 14.04 image is stored   |
+|              | in Glance. The TC is invoked and logs are produced and       |
+|              | stored.                                                      |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None. Storage performance results are fetched and stored.    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
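+
+As an illustration only, the options above may appear in the scenario
+section of the test case file roughly as sketched below; the scenario type
+and the values shown are the illustrative defaults listed above, not an
+authoritative copy of opnfv_yardstick_tc074.yaml:
+::
+
+    ---
+    scenarios:
+    -
+      type: StorPerf
+      options:
+        agent_count: 1
+        agent_image: "Ubuntu-14.04"
+        public_network: "ext-net"
+        volume_size: 2
+        block_sizes: "4096"
+        queue_depths: "4"
+        StorPerf_ip: "192.168.200.2"
+        query_interval: 10
+        timeout: 600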
diff --git a/docs/userguide/opnfv_yardstick_tc075.rst b/docs/userguide/opnfv_yardstick_tc075.rst
new file mode 100644
index 000000000..a6ff34447
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc075.rst
@@ -0,0 +1,60 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC075
+*************************************
+
+
++-----------------------------------------------------------------------------+
+|Network Capacity and Scale Testing                                           |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC075_Network_Capacity_and_Scale_testing     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Number of connections, number of frames sent/received        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the network capacity and scale with regard to    |
+|              | connections and frames.                                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc075.yaml                             |
+|              |                                                              |
+|              | There is no additional configuration to be set for this TC.  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | netstat                                                      |
+|              |                                                              |
+|              | Netstat is normally part of any Linux distribution, hence it |
+|              | doesn't need to be installed.                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | Netstat man page                                             |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | This test case is mainly for evaluating network performance. |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | Each pod node must have netstat included in it.              |
+|conditions    |                                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The pod is available.                                        |
+|              | Netstat is invoked and logs are produced and stored.         |
+|              |                                                              |
+|              | Result: Logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None. Number of connections and frames are fetched and       |
+|              | stored.                                                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
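+
+For reference only, a sketch of the kind of netstat invocations that yield
+the metrics above; the exact options used by the scenario are assumptions:
+::
+
+    # approximate number of connections: list all TCP sockets, count lines
+    netstat -ant | wc -l
+
+    # frames sent/received: per-interface packet counters (RX-OK/TX-OK)
+    netstat -i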