By default ConfigParser will lowercase everything
unless you override optionxform.
Also sort the key/value pairs in the inventory line for consistency.
https://docs.python.org/3/library/configparser.html#configparser.ConfigParser.optionxform
Transforms the option name option as found in an input file or as passed in by
client code to the form that should be used in the internal structures. The
default implementation returns a lower-case version of option; subclasses may
override this or client code can set an attribute of this name on instances to
affect this behavior.
You don’t need to subclass the parser to use this method, you can also set it
on an instance, to a function that takes a string argument and returns a
string. Setting it to str, for example, would make option names case sensitive:
cfgparser = ConfigParser()
cfgparser.optionxform = str
Note that when reading configuration files, whitespace around the option names
is stripped before optionxform() is called.
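A minimal sketch of the behaviour described above, using a hypothetical inventory section (the file name and keys are illustrative):
from configparser import ConfigParser

# Preserve option-name case instead of the default lower-casing.
parser = ConfigParser()
parser.optionxform = str

# Hypothetical inventory entries; sort the keys for a consistent line order.
entries = {'Node2': '10.0.0.2', 'Node1': '10.0.0.1'}
parser.add_section('nodes')
for key in sorted(entries):
    parser.set('nodes', key, entries[key])

with open('inventory.ini', 'w') as handle:
    parser.write(handle)  # option names keep their original case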
YARDSTICK-833
Change-Id: Ia1810b0c77922d84e11c9e538540b38816338593
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
(cherry picked from commit 3e93bb8ff3ef9ff454d6be13295198dbeac75df7)
|
|
JIRA: YARDSTICK-848
The NSB PROX MPLS test uses the Binsearch traffic
profile, so the MPLS traffic profile is a duplicate.
Change-Id: Ie2124cebf306fd6917b70ecd7c23ae12ef4850dc
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
(cherry picked from commit 1b9cc8a38a4866797bd49d006e22607b348f42ac)
|
|
When we create the TRex config we sort the ports by PCI bus address
and create a logical port ordering.
We need to save this port ordering and re-use it everywhere.
Redirect vnfd_helper.port_num() to resource_helper.port_num() to
use the logical mapping.
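A minimal sketch of deriving and saving the logical port ordering from the PCI bus addresses (the interface names and fields are illustrative, not the actual vnfd/resource helper code):
# Hypothetical interface records with their PCI (vpci) addresses.
interfaces = [
    {'name': 'xe1', 'vpci': '0000:05:00.1'},
    {'name': 'xe0', 'vpci': '0000:05:00.0'},
]

# Sort once by PCI bus address and freeze the result as the logical ordering.
ordered = sorted(interfaces, key=lambda iface: iface['vpci'])
port_num_map = {iface['name']: index for index, iface in enumerate(ordered)}

def port_num(name):
    # Re-use the saved logical mapping everywhere instead of re-sorting.
    return port_num_map[name]

print(port_num('xe0'))  # -> 0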
Change-Id: Ibff628556d5e11e686e15716a66a3210758c4ff0
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
(cherry picked from commit ce52059f5f78912eeff2d97235c1028c218bf960)
|
|
JIRA: YARDSTICK-802
Addition of PROX LW_AFTR based on PROX/DATS v037 test_104_lw_aftr.py
- Supports BM and OpenStack Heat
- Supports 4 ports only
- Grafana dashboards included
- Code coverage / unit testing
Change-Id: If2170ab458bf687256d5f1a1e840a3b9d2788ef7
Signed-off-by: Daniel MArtin Buckley <daniel.m.buckley@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
(cherry picked from commit b9e394b2f0955c76f883021c4f65c136b80d9261)
|
|
Users would like to ensure placement of VMs on specific compute nodes so
that the measurements are meaningful. Examples: measure network
performance in different scenarios (VMs in the same host, in different
hosts, across the fabric, across tenants).
Example:
context:
name: yardstick
placement_groups:
pgrp1:
policy: "availability"
servers:
tg_0:
floating_ip: true
placement: "pgrp1"
availability_zone: "zone2"
vnf_0:
floating_ip: true
placement: "pgrp1"
availability_zone: "zone1"
Change-Id: I28a757c25ae3f5b3571ab3edd82d51ceba32c302
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
(cherry picked from commit 81b9d338268f47f3d8863f10ef3940f0ea79d618)
|
|
For some L2/L3 DPDK testcases we need to use a custom
IP address space different from what Heat provides.
These testcases require port_security_enabled = False so that
Neutron allows unrestricted L2 traffic.
This works because we bind the ports to DPDK and thus
don't need DHCP.
vnf_0:
floating_ip: true
placement: "pgrp1"
network_ports:
mgmt:
- mgmt
uplink_0:
- xe0:
local_ip: 10.44.0.20
netmask: 255.255.255.0
downlink_0:
- xe1:
local_ip: 10.44.0.30
netmask: 255.255.255.0
Also fix up flake8 errors in the unit tests.
Change-Id: Id29dfffa692f16fb1f526d208db43e476e2f7830
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
(cherry picked from commit ec6a90d449f8b1ab2b17083188ec65f75ab7818b)
|
|
This patch reverts the changes of patch 45227 and
includes redirecting the console output of the LiveMigration
execution to /dev/null, as the stdout contains only the statistics,
i.e. totaltime, downtime and setuptime.
This reverts commit 5a1f65d3e7d67488ee6f558dccfa5ca5581ddb65.
Change-Id: I252b5a4045657cfa8362e9aae755249480cd3b77
Signed-off-by: Navya <navyax.bathula@intel.com>
(cherry picked from commit 3ca70b916c386b7ec4d9a7f2f9bb6fec2e917785)
|
|
This patch removes loading the livemigration test case result
into JSON, as there is no dashboard implemented for the test case.
Change-Id: I7a9589a0bbc5f2a28587c2878da042fc50af18e0
Signed-off-by: Navya Bathula <navyax.bathula@intel.com>
(cherry picked from commit 5a1f65d3e7d67488ee6f558dccfa5ca5581ddb65)
|
|
intel_pmu needs to download a config file based on the
CPU model. When generating VNF images we don't have
access to the actual vCPU that will be used, so we
can't predownload. This code was meant to be a fix
for that by downloading all the configs and then
selecting one that matched the vCPU.
However, we have license issues with intel_pmu, even with the GPLv2 code,
so remove it for now.
Change-Id: I5257ff7c4ddc1d40537dadb29efa40d1d68cb852
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
(cherry picked from commit 7a5c45daa9b146dfc50068165aba5ec6bc2e1e2c)
|
|
Removed the abs function which can potentially mask
negative dropped packets.
Dropped packets in Prox workload VNF = max((tx_packets - rx_packets), 0)
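A one-line sketch of the change (counter names follow the formula above; the values are illustrative):
tx_packets, rx_packets = 1000, 1005  # example counters where rx exceeds tx

# Before: abs() reported a positive drop count even when rx_packets > tx_packets.
# dropped = abs(tx_packets - rx_packets)

# After: clamp at zero so a negative difference is not reported as drops.
dropped = max(tx_packets - rx_packets, 0)
print(dropped)  # -> 0, where abs() would wrongly report 5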
Change-Id: I510a351e899cdf9a1f366d632b9f0528b1d9dcce
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
(cherry picked from commit a27278dacaa54ae60cd3bdfa6e6145643f76fa02)
|
|
Change-Id: I9d246828790467c2a57ba410826ee9751fff89c5
Signed-off-by: JingLu5 <lvjing5@huawei.com>
(cherry picked from commit 4712d72a570dc9e2799227d489ee41768881a06d)
|
|
Change-Id: Ie770ca69ebdc66589ed6ca5c25bfc9a75afb8938
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: If2e079966939b7faa33d2833d81caad0a3669036
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: I1f457c9c24f2ca84dde61b64f58edaff8952670a
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
Switch from a hardcoded path to a dynamic path
based on bin_path.
Also enable proxy for install_collectd
and add barometer settings for virt and ovs_stats.
Change-Id: Id138aef548332a3e3fcb3963b746e7c9f10c0948
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I8674caa15c9fc32cfacb17f558da5fb31094877e
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-755
There is a historical problem: iperf used 'udp' to set the network protocol.
This change renames the option to 'protocol',
so you can use 'tcp', 'udp' and other protocols.
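A minimal sketch of how the renamed option could drive the iperf3 command line (the options dict and flag handling are illustrative, not the exact Yardstick code):
def build_iperf_cmd(options):
    # 'protocol' replaces the old 'udp' option.
    protocol = options.get('protocol', 'tcp')
    cmd = ['iperf3', '-c', options['target'], '--json']
    if protocol == 'udp':
        cmd.append('-u')  # iperf3 defaults to TCP; -u selects UDP
    return cmd

print(build_iperf_cmd({'target': '10.0.0.1', 'protocol': 'udp'}))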
Change-Id: I1a101013dfe58165a3ed08aa77f0aa2f73d57a12
Signed-off-by: Ace Lee <liyin11@huawei.com>
(cherry picked from commit 10f85b332c4b1f55e651aeb9c45b328e1ebdc2af)
|
|
JIRA: YARDSTICK-802
Addition of Prox vPE test case
- The test supports BM, OpenStack Heat
- Supports 4 ports
- Grafana dashboards included
- Added support for parameters.lua
for prox additional files
- Unit tests for code coverage
Change-Id: I5cccb351dacba88a293ae4b8aba1f0a803d62e6d
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel MArtin Buckley <daniel.m.buckley@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Sometimes Jenkins fails due to what appear to be concurrency problems
in the os.environ mock.
======================================================================
FAIL: tests.unit.benchmark.core.test_task.TaskTestCase.test_parse_suite_with_constraint_no_args
tags: worker-10
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.5/unittest/mock.py", line 1157, in patched
return func(*args, **keywargs)
File "/home/jenkins/opnfv/slave_root/workspace/yardstick-verify-euphrates/tests/unit/benchmark/core/test_task.py", line 208, in test_parse_suite_with_constraint_no_args
task_files, task_args, task_args_fnames = t.parse_suite()
File "/home/jenkins/opnfv/slave_root/workspace/yardstick-verify-euphrates/yardstick/benchmark/core/task.py", line 455, in parse_suite
cur_pod = os.environ.get('NODE_NAME', None)
File "/usr/lib/python3.5/unittest/mock.py", line 917, in __call__
return _mock_self._mock_call(*args, **kwargs)
File "/usr/lib/python3.5/unittest/mock.py", line 976, in _mock_call
result = next(effect)
StopIteration
Ran 1262 tests in 2.375s
FAILED (id=0, failures=1)
error: testr failed (1)
+ testr failing
Replace the mock decorator with a context manager to try to
reduce the scope and duration of the mock.
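A minimal sketch of the pattern, assuming the test only needs NODE_NAME while the code under test runs (the test and variable names are illustrative):
import os
from unittest import TestCase, mock

class ParseSuiteTestCase(TestCase):
    def test_parse_suite_reads_node_name(self):
        # Patch os.environ only for the narrow block that needs it, instead of
        # decorating the whole test method and keeping the mock alive longer.
        with mock.patch.dict(os.environ, {'NODE_NAME': 'node1'}):
            cur_pod = os.environ.get('NODE_NAME', None)  # stands in for t.parse_suite()
        self.assertEqual(cur_pod, 'node1')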
Change-Id: I342fe6c403e66c53ac4c39fd88fa9047cdfae5d9
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
(cherry picked from commit 2fadf7aec9e2761c39d29d8af1ee7d69d154652d)
|
|
Change-Id: Ibd159359c6f57d573a909d6841c121c15bf692c1
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I27bcc41c855f34fb1fd0332fc24e7bf0b2af4ec2
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-790
Change-Id: I6bb36c98b8673155d3142fc54cfb39315d5ce613
Signed-off-by: qiujuan <juan_qiu@tongji.edu.cn>
|
|
The PROX tests were hanging in the duration
runner.
These are fixes for various errors:
raise error in collect_kpi if VNF is down
move prox dpdk_rebind after collectd stop
fix dpdk nicbind rebind to group by drivers
prox: raise error in collect_kpi if the VNF is down
prox: add VNF_TYPE for consistency
sample_vnf: debug and fix kill_vnf
pkill is not matching some executable names,
add some debug process dumps and try switching
back to killall until we can find the issue
sample_vnf: add default timeout, so we can override
default 3600 SSH timeout
collect_kpi is the point at which we check
the VNFs and TGs for failures or exits
queues are the problem: make sure we aren't silently blocking on
non-empty queues by cancelling the join thread in the subprocess
fix up the duration runner to close queues
and other attempts to stop the duration runner
from hanging
VnfdHelper: memoize port_num
resource: fail if ssh can't connect
at the end of a 3600-second test our SSH connection
is dead, so we can't actually stop collectd
unless we reconnect
fix stop() logic to ignore ssh errors
Change-Id: I6c8e682a80cb9d00362e2fef4a46df080f304e55
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Set the TRex -c option (threads per port) based on the
hardware number of queues.
We can't auto-detect the number of queues, and we can't
use more than one thread per core on systems with single-queue
interfaces, so move the option to the config file:
options:
tg_0:
queues_per_port: 2
Also enable TRex debug by removing the >/dev/null redirection:
options:
tg_0:
trex_server_debug: true
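A minimal sketch of reading these options and building the TRex server command (the defaults and command layout are illustrative):
DEFAULT_QUEUES_PER_PORT = 1

def trex_server_cmd(options, cfg_path):
    tg_options = options.get('tg_0', {})
    threads_per_port = tg_options.get('queues_per_port', DEFAULT_QUEUES_PER_PORT)
    cmd = './t-rex-64 -i --cfg {} -c {}'.format(cfg_path, threads_per_port)
    if not tg_options.get('trex_server_debug', False):
        cmd += ' >/dev/null'  # keep the old silent behaviour unless debugging
    return cmd

print(trex_server_cmd({'tg_0': {'queues_per_port': 2}}, '/etc/trex_cfg.yaml'))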
Change-Id: I46da187849282bf28f4ef5b333e1ae890e202768
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Allow manually adding collectd nodes using the Node context.
If a node is present with a collectd config dict then
we can create a ResourceProfile object for it
and connect to collectd.
Example:
nodes:
-
name: compute_0
role: Compute
ip: 1.1.1.1
user: root
password: r00t
collectd:
interval: 5
plugins:
ovs_stats: {}
Change-Id: Ie0c00fdb58373206071daa1fb13faf175c4313e0
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Sometimes the runners can hang. Initially,
debugging led to the queue join thread, so I thought
we could cancel all the join threads and everything would be okay.
But it turns out canceling the queue join threads can lead
to corruption of the queues, so when we go to drain the queues
the task hangs.
But it also turns out that we were not properly draining
the queues in the task process. We were waiting for all
the runners to exit, then draining the queues.
This is bad and will cause the queues to fill up and hang
and/or drop data or corrupt the queues.
The proper fix seems to be draining the queues in a
loop before calling join with a timeout.
Also modified the queue drain loops to not block on queue.get().
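A minimal, self-contained sketch of the drain-then-join pattern (the worker and queue names are illustrative):
import queue  # only for the Empty exception
from multiprocessing import Process, Queue

def worker(result_queue):
    for i in range(1000):
        result_queue.put(i)

def drain(result_queue, results):
    # Non-blocking drain so we never hang on get() against a racing producer.
    while True:
        try:
            results.append(result_queue.get_nowait())
        except queue.Empty:
            return

if __name__ == '__main__':
    result_queue = Queue()
    runners = [Process(target=worker, args=(result_queue,)) for _ in range(2)]
    for runner in runners:
        runner.start()
    results = []
    for runner in runners:
        # Drain in a loop *before* join so the queue never fills and blocks the
        # producer, and join with a timeout instead of waiting forever.
        while runner.is_alive():
            drain(result_queue, results)
            runner.join(timeout=1)
    drain(result_queue, results)  # pick up anything left after the runners exit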
Revert "cancel all queue join threads"
This reverts commit 75c0e3a54b8f6e8fd77c7d9d95decab830159929.
Revert "duration runner: add teardown and cancel all queue join threads"
This reverts commit 7eb6abb6931b24e085b139cc3500f4497cdde57d.
Change-Id: Ic4f8e814cf23615621c1250535967716b425ac18
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
https://jira.opnfv.org/browse/YARDSTICK-773?filter=-3
Remove dependency of yardstick on utils methods
Change-Id: Iadf502364a7f08c279a8f0d17d7e45e8047f4066
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
Change-Id: I05cb069984b7674924cfcb1ed023048c0aa0c444
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
New context names:
- SRIOV -> StandaloneSriov
- OvsDpdk -> StandaloneOvsDpdk
- Separate helper, libvirt and server info classes
- Allow multi-port and multi-VM support.
Change-Id: I3c65e4535082fa0e2f4c6ee11c3bca9ccfdc01b8
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Martin Banszel <martinx.banszel@intel.com>
|
|
Change-Id: I031cc7f24f0c0816eb577a4d1606a714f68a5f83
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
When an IP range is specified in src_ip/dst_ip like:
src_ip:
- '152.16.100.180-152.16.100.181'
yardstick would raise a "'str' object has no attribute 'items'" error.
This change returns the IP range as-is if its type is str.
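A minimal sketch of the guard, assuming the value is either a mapping or a plain range string (the helper name is hypothetical):
def expand_ip_field(value):
    # A range such as '152.16.100.180-152.16.100.181' arrives as a plain string;
    # return it unchanged instead of assuming a mapping.
    if isinstance(value, str):
        return value
    # Otherwise walk the mapping's items as before.
    return dict(value.items())

print(expand_ip_field('152.16.100.180-152.16.100.181'))
print(expand_ip_field({'count': 1}))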
Change-Id: I3b097777f0d85b0600207157bebba18987ea2275
Signed-off-by: Dino Simeon Madarang <dinox.madarang@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-802
Added Prox BNG and BNG-QoS Test
- The tests support BM, OpenStack Heat
- Supports 4 ports
- Test added for BNG traffic profile
- Fixed the Prox heat test cases with
proper upstream and downstream links
- Grafana Dashboard for BNG & BNG-QoS added
- Increased the test duration to 300
TODO:
- Test does not terminate correctly
Update:
Added a new helper class for run_test: Generic, MPLS
and BNG tests.
Change-Id: Ib40811bedb45a3c3030643943f32679a4044e076
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
|
|
We have the collectd.conf inside the Python package,
so instead of copying it from various places,
write the template directly to the remote system.
collectd: read the collectd.conf template with pkg_resources
read the collectd.conf file as a string directly
and upload it without creating a temp file
use a proper Jinja2 template; disable the plugins that
were failing to load and blocking startup
add support for per-testcase collectd.conf config
using YAML
add support for a custom interval; the default is 25 seconds
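A minimal sketch of rendering the packaged template, assuming a Jinja2 collectd.conf shipped inside the package (the package path and template variables are illustrative):
import pkg_resources
from jinja2 import Template

# Read the template that ships inside the Python package; no on-disk copy needed.
template_str = pkg_resources.resource_string(
    'yardstick.network_services.nfvi', 'collectd.conf').decode('utf-8')

# Render per-testcase settings, e.g. a custom interval (default 25 seconds).
rendered = Template(template_str).render(interval=25, plugins=['cpu', 'memory'])

# The rendered string can then be uploaded directly over SSH
# instead of writing a temporary file first.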
Change-Id: Id904f7b7c9f41a9dd7adf5dfa06c064d65c25d2d
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: Ic8aa130f3cdc7bd8dec39d06a6b824340bf658b2
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: Ia934128777d2839f6d2b940857c266fc3e2bd4a1
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Specify the Kubernetes running node when creating containers for the
Kubernetes context.
For example, a yaml file may look like:
servers:
host:
image: xxx
command: /bin/bash
nodeSelector:
xxx: yyy
Also update the unit test for this function accordingly.
Change-Id: If74c9dad9b1a70395bb79f34708a0fde04e7e650
Signed-off-by: Trevor Tao <trevor.tao@arm.com>
|
|
Change-Id: Ia9722604b7c8ae23e784e780f113d012de544d4b
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-803
Currently the Kubernetes test case can only run on the master node.
We need to support running it from a jump server.
So add a Service and use the NodePort type.
Then we can log in to the pod using the nodePort.
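A minimal sketch of the Service body this implies, expressed as the kind of dict a Kubernetes API client accepts (the names and ports are illustrative):
# Hypothetical Service of type NodePort so the pod's SSH port is reachable from
# outside the cluster, e.g. from a jump server.
service_body = {
    'apiVersion': 'v1',
    'kind': 'Service',
    'metadata': {'name': 'host-service'},
    'spec': {
        'type': 'NodePort',
        'selector': {'app': 'host'},
        'ports': [{'port': 22, 'targetPort': 22, 'protocol': 'TCP'}],
    },
}
# Once created, Kubernetes assigns a nodePort (30000-32767 by default), and we
# can SSH to <node-ip>:<nodePort> to reach the pod.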
Change-Id: Ia7900d263f1c5323f132435addec27ad10547ef9
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|
|
We were using the raw sort index of the interfaces to
set the MAC address, but we should be using the
traffic id from the static JSON instead.
Change-Id: I13284db04abb3eaf8c9826974a9e5aa1c37b3891
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I26957977e6dcd0392078a543a6907a550711c702
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
The problem is that we share the same ProxResourceHelper
for both VNF and TG.
For the VNF we want to talk to resource.py and get collectd KPIs.
For the TG we need to read the TG-calculated KPIs from the queue, and
we also want collectd KPIs.
The workaround is to use a different method name, collect_collectd_kpi,
for VNFs.
Change-Id: Icc2132758e37ce210f5600a0cd433077930208e5
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-802
Addition of PROX L2FWD_Multiflow, ACL, Load Balancing plus
grafana dashboards
Supports 2 and 4 port Baremetal & Heat
Change-Id: I1f3990d5451de265ee3901302569c355ece3b146
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
|
|
The prox files were being found correctly.
If we use find_relative_file they will be looked up
relative to the task_path.
Change-Id: Ifde5d07df5ccfbfeba015b2f43bd8b53e89a00b7
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
We generate the prox_config_dict in the _run process,
but we also need it in the _traffic_runner process to
get core info.
Use a queue to pass the config list between the processes.
Enable collect_kpi.
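A minimal sketch of passing the generated config between the two processes (the process names follow the commit; the config contents are illustrative):
from multiprocessing import Process, Queue

def _run(config_queue):
    # The config is generated in the _run process...
    prox_config_data = [('core 0', [('mode', 'gen')]), ('core 1', [('mode', 'lat')])]
    config_queue.put(prox_config_data)  # ...and published for the other process.

def _traffic_runner(config_queue):
    # The _traffic_runner process reads the same config to extract core info.
    prox_config_data = config_queue.get()
    cores = [section for section, _ in prox_config_data if section.startswith('core')]
    print(cores)

if __name__ == '__main__':
    config_queue = Queue()
    Process(target=_run, args=(config_queue,)).start()
    Process(target=_traffic_runner, args=(config_queue,)).start()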
Change-Id: Ibaf41d606e559a87addf43d6ddaed206dbd2d20c
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
Instead of using a key_filename for Heat, we can
read the key as a string directly using pkg_resources.resource_string().
This will enable us to save Heat stacks as pod.yaml, because
we can embed the key into the pod.yaml directly.
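A minimal sketch of reading the packaged key as a string instead of passing a key_filename (the package and resource names are illustrative):
import pkg_resources

# Read the private key shipped inside the package as a string; no temp file or
# key_filename needed, so the key can be embedded into a generated pod.yaml.
key_string = pkg_resources.resource_string(
    'yardstick.resources', 'files/yardstick_key').decode('utf-8')

pod_node = {
    'name': 'node1',
    'ip': '10.0.0.5',
    'user': 'ubuntu',
    'key': key_string,  # embedded key material instead of a path
}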
Change-Id: I16baaba17dab845ee0846f97678733bae33cb463
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
Also rename private to uplink and public to downlink.
For the scale-out template we need to count from 0
so we can use range() without +1/-1 errors:
vnf_0, vnf_1
tg_0, tg_1
Also fix Ixia defaults.
Change-Id: I6aecfbb95f99af20f012a9df19c19be77d1b5b77
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|