Age | Commit message | Author | Files | Lines |
|
JIRA: YARDSTICK-1108
We add this test case to the ovs_dpdk-ha and ovs_dpdk-noha test suites,
and also add the image build to the load_images script.
Change-Id: I2b0c6b106dd98c3693df18dba46259ff8ef0a76e
Signed-off-by: liyin <liyin11@huawei.com>
|
|
The atexit handler calls terminate_all after the regular Python execution
path, and it looks like the traceback stack is None at that point.
In this context log.debug("", exc_info=True) doesn't work:
it prints out NoneType on Python 3 and
causes other problems on Python 2.7.
Remove exc_info=True from the logging call.
JIRA: YARDSTICK-1107
Change-Id: Ida0a0ced7ff5e017e2f8608880e3bb531724af95
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
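A minimal sketch of the situation described above, with illustrative names (terminate_all is a stub here, not the real runner code): inside an atexit handler the traceback stack is already cleared, so exc_info=True gives the logger nothing useful to format.

    import atexit
    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)


    def terminate_all():
        pass  # stand-in for the real runner cleanup


    def atexit_handler():
        # before the fix: LOG.debug("Terminating all runners", exc_info=True)
        LOG.debug("Terminating all runners")  # after the fix: exc_info=True removed
        terminate_all()


    atexit.register(atexit_handler)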
|
|
New scenarios from wiki: https://wiki.opnfv.org/display/SWREL/Fraser+Scenario+Statu
Change-Id: Ifd6e45e73be2bbb99743aa3f4981d22899aab92a
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
The RabbitMQ service added in [1] is not correctly installed and initialized:
- There is an error during the installation process ("\" character
missing).
- In the installation script, the service needs to be started first.
- In the container installation, the service needs to be started via
supervisor.
[1]https://gerrit.opnfv.org/gerrit/#/c/53597/
JIRA: YARDSTICK-1103
Change-Id: Iade3d6ce4b522e6f576af71b7afe5559081f7929
Signed-off-by: Rodolfo Alonso Hernandez <rodolfo.alonso.hernandez@intel.com>
|
|
If the Task raises an exception we currently hide it
and replace it with RuntimeError. This is bad.
If an exception occurred, then we don't have a result, so
re-raise the original exception.
(We could instead log the traceback and raise RuntimeError, but
that doesn't seem to be a good idea.)
Sample traceback after re-raising the original exception. Without this patch
the ValueError is only passed to _write_error_data:
2018-03-25 22:57:56,511 yardstick.benchmark.contexts.node node.py:85 DEBUG BareMetals: []
2018-03-25 22:57:56,511 yardstick.benchmark.contexts.node node.py:89 DEBUG Env: {}
2018-03-25 22:57:56,511 yardstick.cmd.commands.task task.py:57 INFO Task FAILED
Traceback (most recent call last):
  File "/home/rbbratta/yardstick-upstream/yardstick/yardstick/cmd/commands/task.py", line 54, in do_start
    result = Task().start(param, **kwargs)
  File "/home/rbbratta/yardstick-upstream/yardstick/yardstick/benchmark/core/task.py", line 103, in start
    task_args_fnames)
  File "/home/rbbratta/yardstick-upstream/yardstick/yardstick/benchmark/core/task.py", line 321, in _parse_tasks
    task_args_fnames[i]
  File "/home/rbbratta/yardstick-upstream/yardstick/yardstick/benchmark/core/task.py", line 558, in parse_task
    context.init(cfg_attrs)
  File "/home/rbbratta/yardstick-upstream/yardstick/yardstick/benchmark/contexts/heat.py", line 131, in init
    server = Server(name, self, server_attrs)
  File "/home/rbbratta/yardstick-upstream/yardstick/yardstick/benchmark/contexts/model.py", line 210, in __init__
    (name, p))
ValueError: server 'trafficgen_1', placement 'pgrp2' is invalid
2018-03-25 22:57:56,512 yardstick.cmd.commands.task task.py:62 INFO Task FAILED
2018-03-25 22:57:56,662 yardstick.benchmark.runners.base base.py:124 DEBUG Terminating all runners
NoneType
JIRA: YARDSTICK-1102
Change-Id: I7e6fa41fc1d36f6d438a1602ab60cb41ffbee1e9
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
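A minimal, hypothetical sketch of the pattern described above (function names are illustrative, not the actual task.py code): record the failure, then re-raise the original exception instead of replacing it with RuntimeError.

    import logging

    LOG = logging.getLogger(__name__)


    def run_task(task_fn):
        try:
            result = task_fn()
        except Exception:
            LOG.info("Task FAILED", exc_info=True)
            raise  # preserve the original exception (e.g. the ValueError above)
        if result is None:
            # RuntimeError is only raised when no exception explains the missing result
            raise RuntimeError("Task produced no result")
        return result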
|
|
We need to see what Heat failures occurred
so that we can debug CI failures.
The only way to do this seems to be to use the
private event_utils.get_events function to
query for all failures.
Also, since we had a failure, go ahead and dump the
Heat template that failed.
Sample output:
2018-03-25 23:50:08,765 [INFO] yardstick.orchestrator.heat heat.py:629 Creating stack 'yardstick-460ed969' START
2018-03-25 23:50:21,932 [ERROR] yardstick.orchestrator.heat heat.py:644 Resource CREATE failed: BadRequest: resources.yardstick-460ed969-xe1: Invalid input for operation: physical_network 'nosuch' unknown for flat provider network.
Neutron server returns request_ids: ['req-6f981f1e-a9e2-4114-af84-1ee528aed51b']
2018-03-25 23:50:21,933 [ERROR] yardstick.orchestrator.heat heat.py:644 BadRequest: resources.yardstick-460ed969-xe1: Invalid input for operation: physical_network 'nosuch' unknown for flat provider network.
Neutron server returns request_ids: ['req-6f981f1e-a9e2-4114-af84-1ee528aed51b']
2018-03-25 23:50:21,972 [ERROR] yardstick.orchestrator.heat heat.py:645 {'description': '\n'
'All referred generated resources are prefixed with the '
'template\n'
'name (i.e. yardstick-460ed969).\n',
'heat_template_version': '2013-05-23',
'outputs': {'trafficgen_1.yardstick-460ed969': {'description': 'VM UUID',
'value': {'get_resource': 'trafficgen_1.yardstick-460ed969'}},
'trafficgen_1.yardstick-460ed969-fip': {'description': 'floating '
'ip '
JIRA: YARDSTICK-998
Change-Id: Ia8f4e5ba7e280fb9086519680d5ee90a2b442e6b
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
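A rough sketch of the approach described above, assuming the python-heatclient event_utils.get_events() helper (its exact signature varies between releases, so treat the call below as an assumption) and a hypothetical log_heat_failures() function: on a failed stack create, log every *FAILED event and dump the template that was submitted.

    import logging

    from heatclient.common import event_utils

    LOG = logging.getLogger(__name__)


    def log_heat_failures(heat_client, stack_id, template):
        events = event_utils.get_events(heat_client, stack_id=stack_id,
                                        event_args={'sort_dir': 'asc'})
        for event in events:
            if event.resource_status.endswith('FAILED'):
                LOG.error("%s: %s", event.resource_status,
                          event.resource_status_reason)
        # the stack failed anyway, so dump the template for debugging
        LOG.error("%s", template)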
|
|
Rename create_secgroup_rule to create_security_group_rule.
Function create_security_group_rule now uses shade client.
JIRA: YARDSTICK-890
Change-Id: Ie0ebac67a281e55dc95c0e3e33ba43de80aba9ec
Signed-off-by: Shobhi Jain <shobhi.jain@intel.com>
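A hedged sketch (not the exact yardstick.common.openstack_utils code) of what a shade-backed create_security_group_rule wrapper can look like; the pass-through kwargs and the True/False return convention are assumptions.

    import logging

    from shade import exc

    LOG = logging.getLogger(__name__)


    def create_security_group_rule(shade_client, secgroup_name_or_id, **kwargs):
        """Create a rule on a security group using the shade client."""
        try:
            shade_client.create_security_group_rule(secgroup_name_or_id, **kwargs)
            return True
        except exc.OpenStackCloudException:
            LOG.error("Error [create_security_group_rule(shade_client)]",
                      exc_info=True)
            return False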
|
|
Function delete_floating_ip now uses shade client.
JIRA: YARDSTICK-890
Change-Id: I960630926b664266afbe7be00bb1352243b41be0
Signed-off-by: Shobhi Jain <shobhi.jain@intel.com>
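A similar hedged sketch for the shade-backed delete_floating_ip wrapper (again illustrative rather than the exact yardstick code; shade's delete_floating_ip itself returns True/False).

    import logging

    from shade import exc

    LOG = logging.getLogger(__name__)


    def delete_floating_ip(shade_client, floating_ip_id, retry=1):
        try:
            return shade_client.delete_floating_ip(floating_ip_id, retry=retry)
        except exc.OpenStackCloudException:
            LOG.error("Error [delete_floating_ip(shade_client, '%s')]",
                      floating_ip_id, exc_info=True)
            return False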
|
|
yardstick_tc090: Control Node Openstack Service High Availability - Database Instances
|
|
The full path probably isn't matching, so just grep
for the basename.
JIRA: YARDSTICK-1096
JIRA: YARDSTICK-1054
Change-Id: I403a7f51310c0856fae0f79d115ba0786b7c417c
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-1100
This error occurs when running a Kubernetes test case.
The key is the pyOpenSSL dependency; it seems
YARDSTICK-1032 (https://jira.opnfv.org/browse/YARDSTICK-1032) hit
the same problem. Downgrading requests from 2.18.2 to 2.11.1 solves
this problem.
Here is the error log:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/home/opnfv/repos/yardstick/yardstick/benchmark/core/task.py", line 301, in atexit_handler
    context.undeploy()
  File "/home/opnfv/repos/yardstick/yardstick/benchmark/contexts/kubernetes.py", line 63, in undeploy
    self._delete_ssh_key()
  File "/home/opnfv/repos/yardstick/yardstick/benchmark/contexts/kubernetes.py", line 133, in _delete_ssh_key
    k8s_utils.delete_config_map(self.ssh_key)
  File "/home/opnfv/repos/yardstick/yardstick/common/kubernetes_utils.py", line 179, in delete_config_map
    **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 9059, in delete_namespaced_config_map
    (data) = self.delete_namespaced_config_map_with_http_info(name, namespace, body, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 9159, in delete_namespaced_config_map_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 321, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 155, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 387, in request
    body=body)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/rest.py", line 256, in DELETE
    body=body)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/rest.py", line 166, in request
    headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 66, in request
    **urlopen_kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 87, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 321, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 850, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connection.py", line 337, in connect
    cert = self.sock.getpeercert()
  File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 348, in getpeercert
    'subjectAltName': get_subj_alt_name(x509)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 202, in get_subj_alt_name
    except (x509.DuplicateExtension, x509.UnsupportedExtension,
AttributeError: 'module' object has no attribute 'UnsupportedExtension'
Change-Id: I444dde829c91defb475e045aea094d74fc43e75b
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
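For reference, the downgrade described in this commit corresponds to a pin along these lines in the project's pip requirements (the exact file and comment are assumptions):

    requests==2.11.1    # downgraded from 2.18.2, see YARDSTICK-1032 / YARDSTICK-1100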
|
|
Change-Id: I65770a4a879d890c75a2e5774579794fb9b591f6
Signed-off-by: Chornyi, TarasX <tarasx.chornyi@intel.com>
|
|
CI is failing because no loop device can be found for kpartx:
"cmd": [
"kpartx",
"-l",
"/tmp/workspace/yardstick/yardstick-xenial-server.raw"
]
"stderr": "mount: could not find any device /dev/loop#Bad address\ncan't set up loop",
This error occurs when kpartx can't find any loop devices to use
https://build.opnfv.org/ci/job/yardstick-compass-virtual-daily-master/3261/console
JIRA: YARDSTICK-1054
JIRA: YARDSTICK-1096
Change-Id: Ib6131ce29c4f9e81386eb5471dd6107825798620
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-1090
We have the k8-nosdn-stor4nfv-ha/noha scenarios in Compass,
so we need to add these files to trigger them.
Change-Id: I79709c53b8542434f7324ad907fa95b4855839d3
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|
|
JIRA: YARDSTICK-1098
The reason is a missing TasksHandler parameter when running a test suite.
Change-Id: I9dd45caa87d0e39afbf7485443a6e566317f5cea
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|
|
Change-Id: I999b44cc4e0ec1029c6efca224e691298a007689
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
|
|
yardstick_tc091: Control Node Openstack Service High Availability - Heat Api
|
|
More dummy test fixes, based on I0ccb7e9fabdf6bc2890d2e4763f53baee06c87b2.
Since I1447fb5ed447691eaeb0a97f928c0b3333799d07, the context name is
a mandatory parameter for every context.
JIRA: YARDSTICK-886
Change-Id: I10ee6bcc0507fa90b6e99261a98a96655fc66947
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
|
|
In NetworkServices test cases, the TGs (traffic generators) run the traffic in
a separate process. In order to synchronize the traffic injection and the
runner interval loops, an RPC server is needed to publish/subscribe to events.
RabbitMQ is a well supported message queue on Linux (used in OpenStack and collectd)
and is supported by Python projects such as oslo.messaging [1].
RabbitMQ default configuration:
- Port: 5672
- User/password: yardstick/yardstick
[1]https://github.com/openstack/oslo.messaging
JIRA: YARDSTICK-1068
Change-Id: I15db294ee430fb38e574a59b9ce1bf0f8b651a67
Signed-off-by: Rodolfo Alonso Hernandez <rodolfo.alonso.hernandez@intel.com>
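An illustrative sketch (not the actual yardstick code) of publishing a synchronization event over the RabbitMQ instance described above; the transport URL uses the stated defaults, while the topic, server and method names are assumptions.

    from oslo_config import cfg
    import oslo_messaging

    TRANSPORT_URL = "rabbit://yardstick:yardstick@localhost:5672/"

    # get_rpc_transport() exists in recent oslo.messaging releases; older
    # versions expose get_transport() instead.
    transport = oslo_messaging.get_rpc_transport(cfg.CONF, url=TRANSPORT_URL)
    target = oslo_messaging.Target(topic="tg_runner_sync", server="tg_0")
    client = oslo_messaging.RPCClient(transport, target)

    # The TG process casts an event; the runner registers an endpoint that
    # exposes a matching run_traffic() method and uses it to drive its
    # interval loop.
    client.cast({}, "run_traffic", iteration=1)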
|