-rwxr-xr-x [-rw-r--r--]  docs/testing/developer/devguide/devguide.rst   250
-rw-r--r--               tests/unit/benchmark/contexts/test_heat.py       8
-rw-r--r--               tests/unit/benchmark/contexts/test_model.py    172
-rw-r--r--               yardstick/benchmark/contexts/heat.py             1
-rw-r--r--               yardstick/benchmark/contexts/model.py           32
5 files changed, 458 insertions, 5 deletions
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst
index 238fbd93c..3b54ad6e9 100644..100755
--- a/docs/testing/developer/devguide/devguide.rst
+++ b/docs/testing/developer/devguide/devguide.rst
@@ -63,6 +63,256 @@ How Yardstick works?
The installation and configuration of the Yardstick is described in the `user guide`_.
+How to work with test cases?
+----------------------------
+
+
+**Sample Test cases**
+
+Yardstick provides many sample test cases, which are located in the "samples" directory of the repo.
+
+Sample test cases are designed with the following goals:
+
+1. Helping users better understand Yardstick features (including new features and new test capabilities).
+
+2. Helping developers debug new features and test cases before they are officially released.
+
+3. Helping other developers understand and verify a new patch before it is merged.
+
+Therefore, when uploading a patch that introduces a new Yardstick test case or feature, developers should upload a sample test case as well.
+
+
+**OPNFV Release Test cases**
+
+OPNFV release test cases are located in the "tests/opnfv/test_cases" directory of the repo.
+These test cases are run by OPNFV CI jobs, which means they should be more mature than sample test cases.
+OPNFV scenario owners can select related test cases and add them to the test suites that represent their scenarios.
+
+
+**Test Case Description File**
+
+This section introduces the content of a test case description file.
+We will use ping.yaml as an example to show you how to understand the test case description file.
+In this YAML file, you can easily see that it consists of two sections: one is "scenarios", the other is "context". ::
+
+ ---
+ # Sample benchmark task config file
+ # measure network latency using ping
+
+ schema: "yardstick:task:0.1"
+
+ {% set provider = provider or none %}
+ {% set physical_network = physical_network or 'physnet1' %}
+ {% set segmentation_id = segmentation_id or none %}
+ scenarios:
+ -
+ type: Ping
+ options:
+ packetsize: 200
+ host: athena.demo
+ target: ares.demo
+
+ runner:
+ type: Duration
+ duration: 60
+ interval: 1
+
+ sla:
+ max_rtt: 10
+ action: monitor
+
+ context:
+ name: demo
+ image: cirros-0.3.5
+ flavor: yardstick-flavor
+ user: cirros
+
+ placement_groups:
+ pgrp1:
+ policy: "availability"
+
+ servers:
+ athena:
+ floating_ip: true
+ placement: "pgrp1"
+ ares:
+ placement: "pgrp1"
+
+ networks:
+ test:
+ cidr: '10.0.1.0/24'
+ {% if provider == "vlan" %}
+ provider: {{provider}}
+ physical_network: {{physical_network}}
+ {% if segmentation_id %}
+ segmentation_id: {{segmentation_id}}
+ {% endif %}
+ {% endif %}
+
+
+"Contexts" section is the description of pre-condition of testing. As ping.yaml shown, you can configure the image, flavor , name ,affinity and network of Test VM(servers), with this section, you will get a pre-condition env for Testing.
+Yardstick will automatic setup the stack which are described in this section.
+In fact, yardstick use convert this section to heat template and setup the VMs by heat-client (Meanwhile, yardstick can support to convert this section to Kubernetes template to setup containers).
+
+Two Test VMs(athena and ares) are configured by keyword "servers".
+"flavor" will determine how many vCPU, how much memory for test VMs.
+As "yardstick-flavor" is a basic flavor which will be automatically created when you run command "yardstick env prepare". "yardstick-flavor" is "1 vCPU 1G RAM,3G Disk".
+"image" is the image name of test VMs. if you use cirros.3.5.0, you need fill the username of this image into "user". the "policy" of placement of Test VMs have two values (affinity and availability).
+"availability" means anti-affinity. In "network" section, you can configure which provide network and physical_network you want Test VMs use.
+you may need to configure segmentation_id when your network is vlan.
+
+Moreover, you can configure your own specific flavor as below and Yardstick will set up the stack for you ("vcpus" is a count, "ram" is in MB and "disk" in GB, following standard OpenStack flavor semantics). ::
+
+ flavor:
+ name: yardstick-new-flavor
+ vcpus: 12
+ ram: 1024
+ disk: 2
+
+
+Besides the default Heat stack, Yardstick also allows you to set up two other types of stack: "Kubernetes" and "Node". ::
+
+ context:
+ type: Kubernetes
+ name: k8s
+
+and ::
+
+ context:
+ type: Node
+ name: LF
+
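+In practice these two contexts usually carry a little more information.
+A Kubernetes context defines the containers to launch under "servers";
+the image and command below are illustrative placeholders rather than
+required values. ::
+
+  context:
+    type: Kubernetes
+    name: k8s
+
+    servers:
+      host:
+        image: openretriever/yardstick
+        command: /bin/bash
+
+A Node context points Yardstick at a pod descriptor file (the "file"
+keyword) listing the pre-existing nodes it should log into. A minimal
+sketch of such a file, with example node names, addresses and
+credentials only, could look like this. ::
+
+  nodes:
+  -
+    name: node1
+    role: Controller
+    ip: 10.1.0.50
+    user: root
+    key_filename: /root/.ssh/id_rsa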
+
+
+"Scenarios" section is the description of testing step, you can orchestrate the complex testing step through orchestrate scenarios.
+
+Each scenario will do one testing step, In one scenario, you can configure the type of scenario(operation), runner type and SLA of the scenario.
+
+For TC002, We only have one step , that is Ping from host VM to target VM. In this step, we also have some detail operation implement ( such as ssh to VM, ping from VM1 to VM2. Get the latency, verify the SLA, report the result).
+
+If you want to get this detail implement , you can check with the scenario.py file. For Ping scenario, you can find it in yardstick repo ( yardstick / yardstick / benchmark / scenarios / networking / ping.py)
+
+After you select the scenario type (such as Ping), you select the runner type. There are four runner types: "Iteration", "Duration", "Arithmetic" and "Sequence"; a sketch of an Arithmetic runner follows the Duration example below. Usually we use "Iteration" or "Duration", and the default is "Iteration".
+For Iteration, you can specify the number of iterations and the interval between iterations. ::
+
+ runner:
+ type: Iteration
+ iterations: 10
+ interval: 1
+
+This means Yardstick will run the ping test 10 times, with an interval of one second between iterations.
+
+For Duration, you can specify the duration of this scenario and the interval between each ping test. ::
+
+ runner:
+ type: Duration
+ duration: 60
+ interval: 10
+
+This means Yardstick will run the ping test in a loop until the total time of the scenario reaches 60 seconds, with an interval of ten seconds between loops.
+
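+The other two runner types, "Arithmetic" and "Sequence", step one or more
+scenario options through a range or list of values. Below is a minimal
+sketch of an Arithmetic runner; the iterator name "stride" and its range
+are made-up values for illustration. ::
+
+  runner:
+    type: Arithmetic
+    interval: 1
+    iterators:
+    -
+      name: stride
+      start: 64
+      stop: 128
+      step: 64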
+
+The SLA is the pass/fail criterion of a scenario and depends on the scenario: different scenarios can have different SLA metrics. The "action" field controls what happens when the SLA is violated: "monitor" only records the violation, while "assert" fails the test.
+
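+For example, to make the Ping scenario above abort the test run as soon
+as its SLA is violated, you could switch the action from "monitor" to
+"assert" (a minimal sketch, reusing the max_rtt metric shown earlier). ::
+
+  sla:
+    max_rtt: 10
+    action: assert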
+
+**How to write a new test case**
+
+Yardstick already provides a library of test steps; that is, it provides many scenario types.
+
+Basically, what you need to do is orchestrate scenarios from this library.
+
+Here we will show two cases: one is how to write a simple test case, the other is how to write a quite complex test case.
+
+
+**Write a new simple test case**
+
+First, imagine a basic test case description as below.
+
++-----------------------------------------------------------------------------+
+|Storage Performance |
+| |
++--------------+--------------------------------------------------------------+
+|metric | IOPS (Average IOs performed per second), |
+| | Throughput (Average disk read/write bandwidth rate), |
+| | Latency (Average disk read/write latency) |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC005 is to evaluate the IaaS storage |
+| | performance with regards to IOPS, throughput and latency. |
+| | |
++--------------+--------------------------------------------------------------+
+|test | fio test is invoked in a host VM on a compute blade, a job |
+|description | file as well as parameters are passed to fio and fio will |
+| | start doing what the job file tells it to do. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc005.yaml |
+| | |
+|              | IO types are set to read, write, randwrite, randread, rw.    |
+| | IO block size is set to 4KB, 64KB, 1024KB. |
+| | fio is run for each IO type and IO block size scheme, |
+| | each iteration runs for 30 seconds (10 for ramp time, 20 for |
+| | runtime). |
+| | |
+| | For SLA, minimum read/write iops is set to 100, |
+| | minimum read/write throughput is set to 400 KB/s, |
+| | and maximum read/write latency is set to 20000 usec. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This test case can be configured with different: |
+| | |
+| | * IO types; |
+| | * IO block size; |
+| | * IO depth; |
+| | * ramp time; |
+| | * test duration. |
+| | |
+| | Default values exist. |
+| | |
+| | SLA is optional. The SLA in this test case serves as an |
+| | example. Considerably higher throughput and lower latency |
+| | are expected. However, to cover most configurations, both |
+| | baremetal and fully virtualized ones, this value should be |
+| | possible to achieve and acceptable for black box testing. |
+| | Many heavy IO applications start to suffer badly if the |
+| | read/write bandwidths are lower than this. |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with fio included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | A host VM with fio installed is booted. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2        | Yardstick is connected with the host VM by using SSH.        |
+|              | 'fio_benchmark' bash script is copied from the Jump Host to  |
+|              | the host VM via the SSH tunnel.                              |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | 'fio_benchmark' script is invoked. Simulated IO operations |
+| | are started. IOPS, disk read/write bandwidth and latency are |
+| | recorded and checked against the SLA. Logs are produced and |
+| | stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VM is deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
+
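+A description like the above maps naturally onto a task file. The sketch
+below only shows the general shape: the option names follow the Fio
+scenario, while the server name, image and context are illustrative
+placeholders (see opnfv_yardstick_tc005.yaml in the repo for the real
+definition). ::
+
+  schema: "yardstick:task:0.1"
+
+  scenarios:
+  -
+    type: Fio
+    options:
+      rw: read          # one of read, write, randwrite, randread, rw
+      bs: 4k            # one of 4k, 64k, 1024k
+      ramp_time: 10
+      duration: 20
+    host: fio.demo
+
+    runner:
+      type: Iteration
+      iterations: 1
+      interval: 1
+
+    sla:
+      read_bw: 400
+      read_iops: 100
+      read_lat: 20000
+      action: monitor
+
+  context:
+    name: demo
+    image: yardstick-image
+    flavor: yardstick-flavor
+    user: ubuntu
+
+    servers:
+      fio:
+        floating_ip: true
+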
+TODO
+
How can I contribute to Yardstick?
-----------------------------------
diff --git a/tests/unit/benchmark/contexts/test_heat.py b/tests/unit/benchmark/contexts/test_heat.py
index 223d64060..f2e725df2 100644
--- a/tests/unit/benchmark/contexts/test_heat.py
+++ b/tests/unit/benchmark/contexts/test_heat.py
@@ -119,8 +119,12 @@ class HeatContextTestCase(unittest.TestCase):
"2f2e4997-0a8e-4eb7-9fa4-f3f8fbbc393b")
mock_template.add_security_group.assert_called_with("foo-secgroup")
# mock_template.add_network.assert_called_with("bar-fool-network", 'physnet1', None)
- mock_template.add_router.assert_called_with("bar-fool-network-router", netattrs["external_network"], "bar-fool-network-subnet")
- mock_template.add_router_interface.assert_called_with("bar-fool-network-router-if0", "bar-fool-network-router", "bar-fool-network-subnet")
+ mock_template.add_router.assert_called_with("bar-fool-network-router",
+ netattrs["external_network"],
+ "bar-fool-network-subnet")
+ mock_template.add_router_interface.assert_called_with("bar-fool-network-router-if0",
+ "bar-fool-network-router",
+ "bar-fool-network-subnet")
@mock.patch('yardstick.benchmark.contexts.heat.HeatTemplate')
def test_attrs_get(self, mock_template):
diff --git a/tests/unit/benchmark/contexts/test_model.py b/tests/unit/benchmark/contexts/test_model.py
index 5444c2bc8..3e94168b5 100644
--- a/tests/unit/benchmark/contexts/test_model.py
+++ b/tests/unit/benchmark/contexts/test_model.py
@@ -279,6 +279,178 @@ class ServerTestCase(unittest.TestCase):
user_data='',
scheduler_hints='hints')
+ def test_override_ip(self):
+ network_ports = {
+ 'mgmt': ['mgmt'],
+ 'uplink_0': [
+ {'xe0': {'local_ip': '10.44.0.20', 'netmask': '255.255.255.0'}},
+ ],
+ 'downlink_0': [
+ {'xe1': {'local_ip': '10.44.0.30', 'netmask': '255.255.255.0'}},
+ ],
+ }
+ attrs = {
+ 'image': 'some-image', 'flavor': 'some-flavor',
+ }
+ test_server = model.Server('foo', self.mock_context, attrs)
+ test_server.interfaces = {
+ "xe0": {
+ "local_ip": "1.2.3.4",
+ "netmask": "255.255.255.0",
+ },
+ "xe1": {
+ "local_ip": "1.2.3.5",
+ "netmask": "255.255.255.0"
+ }
+ }
+ test_server.network_ports = network_ports
+
+ test_server.override_ip("uplink_0", {"port": "xe0"})
+ self.assertEqual(test_server.interfaces["xe0"], network_ports["uplink_0"][0]["xe0"])
+
+ def test_override_ip_multiple(self):
+ network_ports = {
+ 'mgmt': ['mgmt'],
+ 'uplink_0': [
+ {'xe0': {'local_ip': '10.44.0.20', 'netmask': '255.255.255.0'}},
+ {'xe0': {'local_ip': '10.44.0.21', 'netmask': '255.255.255.0'}},
+ ],
+ 'downlink_0': [
+ {'xe1': {'local_ip': '10.44.0.30', 'netmask': '255.255.255.0'}},
+ ],
+ }
+ attrs = {
+ 'image': 'some-image', 'flavor': 'some-flavor',
+ }
+ test_server = model.Server('foo', self.mock_context, attrs)
+ test_server.interfaces = {
+ "xe0": {
+ "local_ip": "1.2.3.4",
+ "netmask": "255.255.255.0",
+ },
+ "xe1": {
+ "local_ip": "1.2.3.5",
+ "netmask": "255.255.255.0"
+ }
+ }
+ test_server.network_ports = network_ports
+ test_server.override_ip("uplink_0", {"port": "xe0"})
+ self.assertEqual(test_server.interfaces["xe0"], network_ports["uplink_0"][0]["xe0"])
+
+ def test_override_ip_mixed(self):
+ network_ports = {
+ 'mgmt': ['mgmt'],
+ 'uplink_0': [
+ 'xe0',
+ {'xe0': {'local_ip': '10.44.0.21', 'netmask': '255.255.255.0'}},
+ ],
+ 'downlink_0': [
+ {'xe1': {'local_ip': '10.44.0.30', 'netmask': '255.255.255.0'}},
+ ],
+ }
+ attrs = {
+ 'image': 'some-image', 'flavor': 'some-flavor',
+ }
+ test_server = model.Server('foo', self.mock_context, attrs)
+ test_server.interfaces = {
+ "xe0": {
+ "local_ip": "1.2.3.4",
+ "netmask": "255.255.255.0",
+ },
+ "xe1": {
+ "local_ip": "1.2.3.5",
+ "netmask": "255.255.255.0"
+ }
+ }
+ test_server.network_ports = network_ports
+ test_server.override_ip("uplink_0", {"port": "xe0"})
+ self.assertEqual(test_server.interfaces["xe0"], network_ports["uplink_0"][1]["xe0"])
+
+ @mock.patch('yardstick.benchmark.contexts.heat.HeatTemplate')
+ def test__add_instance_with_ip_override_invalid_syntax(self, mock_template):
+ network_ports = {
+ 'mgmt': ['mgmt'],
+ 'uplink_0': 'xe0',
+ 'downlink_0': [
+ {'xe1': {'local_ip': '10.44.0.30', 'netmask': '255.255.255.0'}},
+ ],
+ }
+ attrs = {
+ 'image': 'some-image', 'flavor': 'some-flavor',
+ }
+ test_server = model.Server('foo', self.mock_context, attrs)
+ test_server.network_ports = network_ports
+ context = type("Context", (object,), {})
+ # can't use Mock because Mock.name is reserved
+ context.name = "context"
+ networks = [model.Network(n, context, {}) for n in network_ports]
+
+ with self.assertRaises(SyntaxError):
+ test_server._add_instance(mock_template, 'some-server',
+ networks, 'hints')
+
+ @mock.patch('yardstick.benchmark.contexts.heat.HeatTemplate')
+ def test__add_instance_with_ip_override(self, mock_template):
+ network_ports = {
+ 'mgmt': ['mgmt'],
+ 'uplink_0': [
+ {'xe0': {'local_ip': '10.44.0.20', 'netmask': '255.255.255.0'}},
+ ],
+ 'downlink_0': [
+ {'xe1': {'local_ip': '10.44.0.30', 'netmask': '255.255.255.0'}},
+ ],
+ }
+ attrs = {
+ 'image': 'some-image', 'flavor': 'some-flavor',
+ }
+ test_server = model.Server('foo', self.mock_context, attrs)
+ test_server.network_ports = network_ports
+ context = type("Context", (object,), {})
+ # can't use Mock because Mock.name is reserved
+ context.name = "context"
+ networks = [model.Network(n, context, {}) for n in network_ports]
+
+ test_server._add_instance(mock_template, 'some-server',
+ networks, 'hints')
+ self.assertEqual(test_server.ports, {
+ 'downlink_0': [{'port': 'xe1', 'stack_name': 'some-server-xe1-port'}],
+ 'mgmt': [{'port': 'mgmt', 'stack_name': 'some-server-mgmt-port'}],
+ 'uplink_0': [{'port': 'xe0', 'stack_name': 'some-server-xe0-port'}]
+ })
+
+ @mock.patch('yardstick.benchmark.contexts.heat.HeatTemplate')
+ def test__add_instance_with_multiple_ip_override(self, mock_template):
+ network_ports = {
+ 'mgmt': ['mgmt'],
+ 'uplink_0': [
+ {'xe0': {'local_ip': '10.44.0.20', 'netmask': '255.255.255.0'}},
+ {'xe0': {'local_ip': '10.44.0.21', 'netmask': '255.255.255.0'}},
+ ],
+ 'downlink_0': [
+ {'xe1': {'local_ip': '10.44.0.30', 'netmask': '255.255.255.0'}},
+ ],
+ }
+ attrs = {
+ 'image': 'some-image', 'flavor': 'some-flavor',
+ }
+ test_server = model.Server('foo', self.mock_context, attrs)
+ test_server.network_ports = network_ports
+ context = type("Context", (object,), {})
+ # can't use Mock because Mock.name is reserved
+ context.name = "context"
+ networks = [model.Network(n, context, {}) for n in network_ports]
+
+ test_server._add_instance(mock_template, 'some-server',
+ networks, 'hints')
+ self.assertEqual(test_server.ports, {
+ 'downlink_0': [{'port': 'xe1', 'stack_name': 'some-server-xe1-port'}],
+ 'mgmt': [{'port': 'mgmt', 'stack_name': 'some-server-mgmt-port'}],
+ 'uplink_0': [{'port': 'xe0', 'stack_name': 'some-server-xe0-port'},
+                         # this is not an error: duplicates are allowed here; Heat
+                         # will detect the duplicate ports and raise an error
+ {'port': 'xe0', 'stack_name': 'some-server-xe0-port'}]
+ })
+
@mock.patch('yardstick.benchmark.contexts.heat.HeatTemplate')
def test__add_instance_with_user_data(self, mock_template):
user_data = "USER_DATA"
diff --git a/yardstick/benchmark/contexts/heat.py b/yardstick/benchmark/contexts/heat.py
index ff3e5f801..4ba543b9e 100644
--- a/yardstick/benchmark/contexts/heat.py
+++ b/yardstick/benchmark/contexts/heat.py
@@ -348,6 +348,7 @@ class HeatContext(Context):
port['port'],
port['stack_name'],
self.stack.outputs)
+ server.override_ip(network_name, port)
def make_interface_dict(self, network_name, port, stack_name, outputs):
private_ip = outputs[stack_name]
diff --git a/yardstick/benchmark/contexts/model.py b/yardstick/benchmark/contexts/model.py
index facfab892..97560c9f6 100644
--- a/yardstick/benchmark/contexts/model.py
+++ b/yardstick/benchmark/contexts/model.py
@@ -14,6 +14,8 @@ from __future__ import absolute_import
import six
import logging
+
+from collections import Mapping
from six.moves import range
@@ -240,6 +242,26 @@ class Server(Object): # pragma: no cover
Server.list.append(self)
+ def override_ip(self, network_name, port):
+ def find_port_overrides():
+ for p in ports:
+                # p can be a string or a dict
+                # we can't just index p with port['port'] because p may be
+                # a plain string rather than a mapping
+ if isinstance(p, Mapping):
+ g = p.get(port['port'])
+ # filter out empty dicts
+ if g:
+ yield g
+
+ ports = self.network_ports.get(network_name, [])
+ intf = self.interfaces[port['port']]
+ for override in find_port_overrides():
+ intf['local_ip'] = override.get('local_ip', intf['local_ip'])
+ intf['netmask'] = override.get('netmask', intf['netmask'])
+ # only use the first value
+ break
+
@property
def image(self):
"""returns a server's image name"""
@@ -269,9 +291,13 @@ class Server(Object): # pragma: no cover
continue
else:
if isinstance(ports, six.string_types):
- if ports.startswith('-'):
- LOG.warning("possible YAML error, port name starts with - '%s", ports)
- ports = [ports]
+ # because strings are iterable we have to check specifically
+ raise SyntaxError("network_port must be a list '{}'".format(ports))
+                # convert port sub-dicts into just the port name;
+                # port sub-dicts are used to override the Heat IP address,
+                # but here we only need the port name
+                # we allow duplicates and let Heat raise the error
+ ports = [next(iter(p)) if isinstance(p, dict) else p for p in ports]
# otherwise add a port for every network with port name as network name
else:
ports = [network.name]