.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Ericsson AB and others.

*************************************
Yardstick Test Case Description TC009
*************************************

.. _pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt

+-----------------------------------------------------------------------------+
|Packet Loss                                                                  |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC009_NW PERF, Packet loss                   |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|metric        | Number of flows, packets lost and throughput                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test purpose  | To evaluate the IaaS network performance with regards to     |
|              | flows and throughput, such as if and how different amounts   |
|              | of flows matter for the throughput between VMs on different  |
|              | compute blades.                                              |
|              | Typically e.g. the performance of a vSwitch                  |
|              | depends on the number of flows running through it. Also      |
|              | performance of other equipment or entities can depend        |
|              | on the number of flows or the packet sizes used.             |
|              | The purpose is also to be able to spot trends. Test results, |
|              | graphs and similar shall be stored for comparison reasons and|
|              | product evolution understanding between different OPNFV      |
|              | versions and/or configurations.                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | file: opnfv_yardstick_tc009.yaml                             |
|              |                                                              |
|              | Packet size: 64 bytes                                        |
|              |                                                              |
|              | Number of ports: 1, 10, 50, 100, 500 and 1000. The amount of |
|              | configured ports map from 2 up to 1001000 flows,             |
|              | respectively. Each port amount is run ten times, for 20      |
|              | seconds each. Then the next port_amount is run, and so on.   |
|              |                                                              |
|              | The client and server are distributed on different HW.       |
|              |                                                              |
|              | For SLA max_ppm is set to 1000.                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test tool     | pktgen                                                       |
|              |                                                              |
|              | (Pktgen is not always part of a Linux distribution, hence it |
|              | needs to be installed. It is part of the Yardstick Docker    |
|              | image.                                                       |
|              | As an example see the /yardstick/tools/ directory for how    |
|              | to generate a Linux image with pktgen included.)             |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|references    | pktgen_                                                      |
|              |                                                              |
|              | ETSI-NFV-TST001                                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|applicability | Test can be configured with different packet sizes, amount   |
|              | of flows and test duration. Default values exist.            |
|              |                                                              |
|              | SLA (optional): max_ppm: The number of packets per million   |
|              | packets sent that are acceptable to lose, not received.      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|pre-test      | The test case image needs to be installed into Glance        |
|conditions    | with pktgen included in it.                                  |
|              |                                                              |
|              | No POD specific requirements have been identified.           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | The hosts are installed, as server and client. pktgen is     |
|              | invoked and logs are produced and stored.                    |
|              |                                                              |
|              | Result: logs are stored.                                     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
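The SLA check above is simple arithmetic on pktgen's sent/received counters. A minimal sketch in Python (function names are illustrative, not Yardstick's actual implementation):

```python
# Hypothetical sketch of the TC009 SLA check; names are illustrative,
# not Yardstick's actual code.

def packet_loss_ppm(packets_sent, packets_received):
    """Packet loss expressed as lost packets per million packets sent."""
    lost = packets_sent - packets_received
    return lost * 1_000_000 // packets_sent

def sla_passed(packets_sent, packets_received, max_ppm=1000):
    """True if the measured loss is within the configured max_ppm SLA."""
    return packet_loss_ppm(packets_sent, packets_received) <= max_ppm
```

With max_ppm set to 1000 as in this test case, losing 500 of one million packets passes, while losing 2000 fails.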
**Arithmetic:**
Every test run arithmetically steps the specified input value(s) in the test
scenario, adding a value to the previous input value. It is also possible to
combine several input values for the same test case in different combinations.

Snippet of an Arithmetic runner configuration:
::

  runner:
    type: Arithmetic
    iterators:
    -
      name: stride
      start: 64
      stop: 128
      step: 64

**Duration:**
The test runs for a specific period of time before it is completed.

Snippet of a Duration runner configuration:
::

  runner:
    type: Duration
    duration: 30

**Sequence:**
The test changes a specified input value to the scenario. The input values
to the sequence are specified in a list in the benchmark configuration file.

Snippet of a Sequence runner configuration:
::

  runner:
    type: Sequence
    scenario_option_name: packetsize
    sequence:
    - 100
    - 200
    - 250

**Iteration:**
Tests are run a specified number of times before completed.

Snippet of an Iteration runner configuration:
::

  runner:
    type: Iteration
    iterations: 2


Use-Case View
=============

The Yardstick Use-Case View shows two kinds of users. One is the Tester, who
runs tests in the cloud; the other is the User, who is more concerned with
test results and result analysis.

Testers run a single test case or a test case suite to verify infrastructure
compliance or to benchmark their own infrastructure's performance. Test
results are stored by the dispatcher module; three kinds of storage method
(file, influxdb and http) can be configured. Detailed information about
scenarios and runners can be queried by testers through the CLI.

Users can check test results in four ways. If the dispatcher module is
configured as file (the default), there are two ways to check the test
result: either read it from yardstick.out (default path: /tmp/yardstick.out),
or view a plot of the test result, which is shown when the "yardstick-plot"
command is executed.
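The three storage methods (file, influxdb, http) are selected by configuration. A sketch of that selection pattern, with hypothetical class and function names rather than the real Yardstick dispatcher API:

```python
# Illustrative sketch of dispatcher selection; class and function names are
# hypothetical, not the real Yardstick dispatcher module.

class FileDispatcher:
    """Stands in for the default 'file' storage method."""
    def __init__(self, path="/tmp/yardstick.out"):
        self.path = path
        self.records = []

    def push(self, result):
        # A real file dispatcher would append the result to self.path;
        # here we only keep it in memory to stay self-contained.
        self.records.append(result)

# "influxdb" and "http" dispatchers would be registered alongside "file".
DISPATCHERS = {"file": FileDispatcher}

def get_dispatcher(name="file", **kwargs):
    """Instantiate the configured dispatcher by name."""
    return DISPATCHERS[name](**kwargs)
```

Each configured backend then only needs to implement the same ``push`` interface, which is why the runner side is indifferent to where results end up.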
If the dispatcher module is configured as influxdb, users check test results
on Grafana, which is commonly used for visualizing time series data. If the
dispatcher module is configured as http, users check test results on the
OPNFV testing dashboard, which uses MongoDB as its backend.

.. image:: images/Use_case.png
   :width: 800px
   :alt: Yardstick Use-Case View


Logical View
============

The Yardstick Logical View describes the most important classes, their
organization, and the most important use-case realizations.

Main classes:

**TaskCommands** - "yardstick task" subcommand handler.

**HeatContext** - Converts the test yaml file's context section to HOT,
and deploys and undeploys the OpenStack heat stack.

**Runner** - Logic that determines how a test scenario is run and reported.

**TestScenario** - Type/class of measurement, for example Ping, Pktgen,
(Iperf, LmBench, ...).

**Dispatcher** - Chooses the user-defined way to store test results.

TaskCommands is the main entry point of the "yardstick task" subcommand. It
takes a yaml file (e.g. test.yaml) as input and uses HeatContext to convert
the yaml file's context section to HOT. After the OpenStack heat stack is
deployed by HeatContext with the converted HOT, TaskCommands uses Runner to
run the specified TestScenario. During the first runner initialization, an
output process is created; the output process uses Dispatcher to push test
results. The Runner also creates a process to execute the TestScenario, and
there is a multiprocessing queue between each runner process and the output
process, so the runner process can push real-time test results to the
storage media. A TestScenario typically connects to the VMs using ssh; it
sets up the VMs and runs the test measurement scripts through the ssh
tunnel. After all TestScenarios are finished, TaskCommands undeploys the
heat stack, and the whole test is finished.

.. image:: images/Yardstick_framework_architecture_in_D.png
   :width: 800px
   :alt: Yardstick framework architecture in Danube


Process View (Test execution flow)
==================================

The Yardstick process view shows how yardstick runs a test case. Below is
the sequence graph of the test execution flow using the heat context; each
object represents one module in yardstick:

.. image:: images/test_execution_flow.png
   :width: 800px
   :alt: Yardstick Process View

A user who wants to run a test with yardstick can use the CLI to input the
command that starts a task. "TaskCommands" receives the command and asks
"HeatContext" to parse the context. "HeatContext" then asks "Model" to
convert the model. After the model is generated, "HeatContext" informs
"Openstack" to deploy the heat stack from the heat template. After
"Openstack" deploys the stack, "HeatContext" informs "Runner" to run the
specific test case.

First, "Runner" asks "TestScenario" to process the specific scenario. Then
"TestScenario" logs on to the openstack via the ssh protocol and executes
the test case on the specified VMs. After the script execution finishes,
"TestScenario" sends a message to inform "Runner". When the testing job is
done, "Runner" informs "Dispatcher" to output the test result via file,
influxdb or http. After the result is output, "HeatContext" calls
"Openstack" to undeploy the heat stack. Once the stack is undeployed, the
whole test ends.


Deployment View
===============

The Yardstick deployment view shows how the yardstick tool can be deployed
onto the underlying platform. Generally, the yardstick tool is installed on
a JumpServer (see `07-installation` for detailed installation steps), and
the JumpServer is connected to the other control/compute servers by
networking. Based on this deployment, yardstick can run test cases on these
hosts and collect the test results for presentation.

.. image:: images/Deployment.png
   :width: 800px
   :alt: Yardstick Deployment View


Yardstick Directory structure
=============================

**yardstick/** - Yardstick main directory.

*tests/ci/* - Used for continuous integration of Yardstick at different
PODs and with support for different installers.

*docs/* - All documentation is stored here, such as configuration guides,
user guides and Yardstick descriptions.

*etc/* - Used for test cases requiring specific POD configurations.

*samples/* - Test case samples are stored here; samples for most scenarios
and features are shown in this directory.

*tests/* - Here both the Yardstick internal tests (*functional/* and
*unit/*) and the test cases run to verify the NFVI (*opnfv/*) are stored.
The configurations of what to run daily and weekly at the different PODs
are also located here.

*tools/* - Currently contains tools to build the image for VMs which are
deployed by Heat, including how to build the yardstick-trusty-server image
with the different tools that are needed from within the image.

*plugin/* - Plug-in configuration files are stored here.

*vTC/* - Contains the files for running the virtual Traffic Classifier
tests.

*yardstick/* - Contains the internals of Yardstick: Runners, Scenario,
Contexts, CLI parsing, keys, plotting tools, dispatcher, plugin
install/remove scripts and so on.
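The runner/output handoff described in the Logical View (a queue carrying real-time results from the runner to the output side, which hands them to a dispatcher) can be sketched as follows. Yardstick itself uses separate processes and a multiprocessing queue; this illustration uses threads and ``queue.Queue`` only to keep the sketch self-contained, and all names are hypothetical:

```python
import queue
import threading

# Portable illustration of the runner -> output handoff; Yardstick uses
# separate processes and a multiprocessing queue, threads are a stand-in.
results_q = queue.Queue()
SENTINEL = None  # signals that the TestScenario has finished

def runner(scenario_results):
    for r in scenario_results:
        results_q.put(r)        # push each result as it is produced
    results_q.put(SENTINEL)

def output(collected):
    while True:
        item = results_q.get()
        if item is SENTINEL:
            break
        collected.append(item)  # a real output process calls Dispatcher.push

collected = []
t_out = threading.Thread(target=output, args=(collected,))
t_run = threading.Thread(target=runner,
                         args=([{"rtt_ms": 0.4}, {"rtt_ms": 0.6}],))
t_out.start(); t_run.start()
t_run.join(); t_out.join()
```

The sentinel value lets the output side terminate cleanly once the scenario has pushed its last result, mirroring how the output process outlives any single runner iteration.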