Age | Commit message | Author | Files | Lines
|
JIRA: YARDSTICK-755
Historically, the iperf scenario hard-coded UDP as the network protocol.
This change makes the protocol configurable,
so you can use 'tcp', 'udp' or another protocol.
Change-Id: I1a101013dfe58165a3ed08aa77f0aa2f73d57a12
Signed-off-by: Ace Lee <liyin11@huawei.com>
(cherry picked from commit 10f85b332c4b1f55e651aeb9c45b328e1ebdc2af)
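A minimal sketch of the idea, assuming a hypothetical 'protocol' scenario
option and plain iperf3 flags (not the exact Yardstick code):

    # Hypothetical sketch: map a configurable 'protocol' option to iperf3 flags.
    options = {'protocol': 'udp'}   # taken from the scenario options; 'tcp' is the default
    cmd = ['iperf3', '-c', '10.0.0.2', '--json']
    if options.get('protocol') == 'udp':
        cmd.append('--udp')         # iperf3 runs TCP unless --udp is requested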
|
|
JIRA: YARDSTICK-790
Change-Id: I6bb36c98b8673155d3142fc54cfb39315d5ce613
Signed-off-by: qiujuan <juan_qiu@tongji.edu.cn>
|
|
The PROX tests were hanging in the duration runner.
These are fixes for various errors:
- prox: raise error in collect_kpi if the VNF is down
- prox: move dpdk_rebind after collectd stop
- prox: add VNF_TYPE for consistency
- fix dpdk nicbind rebind to group by drivers
- sample_vnf: debug and fix kill_vnf; pkill is not matching some
  executable names, so add some debug process dumps and switch back to
  killall until we can find the issue
- sample_vnf: add a default timeout so we can override the default
  3600-second SSH timeout
- collect_kpi is the point at which we check the VNFs and TGs for
  failures or exits
- queues are the problem: make sure we aren't silently blocking on
  non-empty queues by canceling the join thread in the subprocess (see
  the sketch below)
- fix up the duration runner to close queues, plus other attempts to
  stop the duration runner from hanging
- VnfdHelper: memoize port_num
- resource: fail if ssh can't connect; at the end of a 3600-second test
  our ssh connection is dead, so we can't actually stop collectd unless
  we reconnect
- fix stop() logic to ignore ssh errors
Change-Id: I6c8e682a80cb9d00362e2fef4a46df080f304e55
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
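As a sketch only (not the runner code itself), this is the multiprocessing
behaviour the queue fix works around: a child process that exits with
buffered data on a Queue blocks in the queue's feeder thread unless
cancel_join_thread() is called:

    import multiprocessing

    def worker(queue):
        # Without this call the child can block at exit, waiting for the
        # parent to drain the buffered item from the queue's feeder thread.
        queue.cancel_join_thread()
        queue.put({'kpi': 42})

    if __name__ == '__main__':
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target=worker, args=(queue,))
        proc.start()
        proc.join()    # returns even if nobody ever reads the queue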
|
|
Change-Id: I05cb069984b7674924cfcb1ed023048c0aa0c444
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
When an IP range is specified in src_ip/dst_ip like:
src_ip:
- '152.16.100.180-152.16.100.181'
yardstick would raise a "'str' object has no attribute 'items'" error.
This change returns the IP range as-is if the value is a str.
Change-Id: I3b097777f0d85b0600207157bebba18987ea2275
Signed-off-by: Dino Simeon Madarang <dinox.madarang@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
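A sketch of the fix, assuming the original code called .items()
unconditionally; the helper name is illustrative:

    def resolve_ip_spec(ip_spec):
        # A range given as a plain string, e.g. '152.16.100.180-152.16.100.181',
        # is returned untouched instead of being treated as a dict.
        if isinstance(ip_spec, str):
            return ip_spec
        return {key: value for key, value in ip_spec.items()}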
|
|
Also rename private to uplink and public to downlink.
For the scale-out template we need to count from 0 so we can use range()
without +1/-1 errors: vnf_0, vnf_1 and tg_0, tg_1 (see the sketch below).
Also fix Ixia defaults.
Change-Id: I6aecfbb95f99af20f012a9df19c19be77d1b5b77
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
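A small illustration of why zero-based names line up with range():

    num_nodes = 2
    vnf_names = ['vnf_{}'.format(i) for i in range(num_nodes)]   # ['vnf_0', 'vnf_1']
    tg_names = ['tg_{}'.format(i) for i in range(num_nodes)]     # ['tg_0', 'tg_1']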
|
|
Add a new PortPair class to resolve the topology into lists of public
and private ports. Before, we were calculating public/private in multiple
locations and using different conventions.
In addition, for all the DPDK tests we need to use the DPDK port number
and not rely on interface ordering or interface naming conventions.
We used to use xe0 -> 0, xe1 -> 1, etc. This is not the DPDK port
number.
Use the new dpdknicbind_helper class to parse the output of
dpdk-devbind.py to find the actual DPDK port number at runtime.
We then use this DPDK port number to correctly calculate the
port_mask_hex.
The port mask maps the DPDK port number (PMD ID) to the LINK ID
used in the pipeline config (see the sketch below).
We also need to make sure we only use the interfaces matched to the
topology and not use all the interfaces, because in some cases we will
have unused interfaces. In particular TRex always requires an even
number of interfaces, so for single port TRex tests we have to create
the second port and not use it.
Thus we had to modify the traffic generator stats code to only dump
stats for used ports and not for unused ports.
Ixia was using interface ordering to map to Ixia ports, instead we use
the dpdk_port_num which must be hardcoded for Ixia.
Renamed traffic_profile.execute to traffic_profile.execute_traffic so
we can trace the code more easily.
We pass the port used by the traffic profile to generate_samples so we
don't get stats for unused ports.
Fixed up vPE config creation and bring-up issues.
Fixed up CGNAPT and UDP_Replay to work correctly.
Tested with 4-port scale-out.
Change-Id: I2e4f328bff2904108081e92a4bf712333fa73869
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
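An illustrative sketch (not the actual helper) of building the hex port mask
from the DPDK port numbers discovered at runtime:

    def port_mask_hex(dpdk_port_nums):
        # Each DPDK port number (PMD ID) sets its own bit in the mask.
        mask = 0
        for port_num in dpdk_port_nums:
            mask |= 1 << port_num
        return hex(mask)

    port_mask_hex([0, 2])    # '0x5'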
|
|
|
|
|
|
Change-Id: I5c039c0d4f4ba651209c7d5ca4e748f9151b5630
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
If you put time.sleep(1) all over your code you need
to mock time.sleep() in your unittests lest the unittests
take forever.
Change-Id: I9ebbf9e21c98e8c46bab727bbb22f33045db4361
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
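A minimal example of the pattern (the test and module path are illustrative):

    import time
    import unittest
    from unittest import mock

    class TestRetryLoop(unittest.TestCase):
        @mock.patch('time.sleep')           # patched so the loop returns instantly
        def test_retry_loop(self, mock_sleep):
            for _ in range(3):
                time.sleep(1)               # would add three real seconds unpatched
            self.assertEqual(mock_sleep.call_count, 3)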
|
|
Change-Id: I3ec1a6d3710d44df5ddac6bd8967d28ad58e8d33
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
The HA test cases didn't store monitor info or report failure when the SLA
didn't pass.
Change-Id: I0e5637e37a66e1bf03b47fe09d17e0a1acfa11c1
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
|
|
If we have a /32, or for some reason fail to find a range of IPs to use,
we default to the single IP specified on the interface.
Change-Id: Ieaa1d57b04e1d57e8cef344d5a53bbca05e7887f
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
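A sketch of the fallback using an illustrative helper, not the exact
Yardstick code:

    import ipaddress

    def usable_ips(cidr, interface_ip):
        # Prefer the network's host range; a /32 (or /128) has no range to
        # hand out, so fall back to the interface's own address.
        network = ipaddress.ip_network(cidr, strict=False)
        if network.prefixlen >= network.max_prefixlen:
            return [ipaddress.ip_address(interface_ip)]
        return list(network.hosts())

    usable_ips('10.0.0.5/32', '10.0.0.5')    # [IPv4Address('10.0.0.5')]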
|
|
|
|
|
|
Change-Id: Ia3677724075c1c1408f50bbfcebd3cbcde251d66
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
JIRA: YARDSTICK-781
This patch adds some common OpenStack operation scenarios.
Change-Id: Id436a201aa04f8f6b98576e8fbf599ca3654827c
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
|
|
- path should be defined via the TREX_CLIENT_LIB environment variable, e.g. TREX_CLIENT_LIB=/opt/trex_client/stl
- refactored unit tests
Change-Id: I18767e48daf774432c010f1b88d18a4f0ee4e156
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
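A sketch of how such a variable is typically consumed (the default path here
is only an example):

    import os
    import sys

    # Trex client library location, e.g. TREX_CLIENT_LIB=/opt/trex_client/stl
    trex_client_lib = os.environ.get('TREX_CLIENT_LIB', '/opt/trex_client/stl')
    if trex_client_lib not in sys.path:
        sys.path.append(trex_client_lib)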
|
|
|
|
|
|
JIRA: YARDSTICK-781
This patch adds some common OpenStack operation scenarios.
Change-Id: I1300a61b389202242f112b6d280ab47746379546
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
JIRA: YARDSTICK-791
In some use cases, Fio is used with a job file instead of parameters.
This work adds support for the job file and adds a new test case for
volume testing.
Change-Id: I312d61bf6e7d95f23eedb0b6487f6103b7d76355
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
|
|
This patch adds the framesize and flow into the test options instead of
adding a separate file, to avoid updating multiple files in case of an
IP change.
Change-Id: Ic473c73773ad36422ecc02618b8c646a5336b70a
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: I6649960dc9fbc61c22c9b7434805fc335634960b
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
|
|
need to use create=True with mock open anyway
Change-Id: I3a35688cf8c367434db9d0cf057030d49deddd0d
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
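A short illustration ('some_module' is hypothetical): create=True lets
mock.patch create the attribute even though the module under test never
imported open into its own namespace:

    from unittest import mock

    with mock.patch('some_module.open', mock.mock_open(read_data='data'),
                    create=True):
        pass    # code in some_module that calls open() now reads 'data'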
|
|
JIRA: YARDSTICK-781
This patch adds some common OpenStack operation scenarios.
Change-Id: Ie59f0d5ae0842f8347824c961436b889a95b1a72
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
|
|
Change-Id: Ic1f13c0d28c1a1b01bbf3c8a6a618a5b3ab5bbeb
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
There are multiple issues with YAML loading.
1. Jinja2 renders None values as the string 'None'. This is not valid YAML;
we need to render None values as '~' or 'null', which is the native YAML
None value.
2. Jinja2 renders dicts and lists that contain unicode with u'foo' values.
This is not valid YAML syntax.
Because we are serializing dict and lists into YAML, we
need to encode them as valid YAML. We can override Jinja2 finalize to
use yaml.dump to dump inline YAML.
We use yaml.safe_dump(elem, default_flow_style=True).replace('\n', '')
to generate valid single-line YAML dict and list values.
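A sketch of such a finalize override, following the description above (the
helper name is illustrative):

    import yaml
    from jinja2 import Environment

    def finalize_for_yaml(value):
        # Serialize dicts/lists as inline YAML and map None to YAML's native
        # null so the rendered template stays valid YAML.
        if isinstance(value, (dict, list)):
            return yaml.safe_dump(value, default_flow_style=True).replace('\n', '')
        if value is None:
            return '~'
        return value

    env = Environment(finalize=finalize_for_yaml)
    env.from_string('addrs: {{ ips }}').render(ips=['10.0.0.1', '10.0.0.2'])
    # 'addrs: [10.0.0.1, 10.0.0.2]'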
But this problem highlights the general difficulties with templating and
loading files.
We could avoid this Python->Jinja2->YAML->Python issue by directly
injecting the list or dict after the YAML is loaded.
I'm not sure of the real utility of these templates.
3. On Python 2 the YAML loader renders all strings as unicode. This does
not work for Trex because Trex is broken and badly coded. Trex does type
checking against str(), which is different for Python 2 and Python 3.
The default YAML loader will return native string types: str() on Python 2
and the unicode str() on Python 3.
The bad Trex code is in convert_val:
https://github.com/cisco-system-traffic-generator/trex-core/blob/master/scripts/automation/trex_control_plane/stl/trex_stl_lib/trex_stl_packet_builder_scapy.py#L674
    def convert_val (val):
        if is_integer(val):
            return val
        if type(val) == str:
            return ipv4_str_to_num (is_valid_ipv4(val))
        raise CTRexPacketBuildException(-11,("init val invalid %s ") % val );
This code is doing type(val) == str. This is bad and broken.
We can't fix Trex, so we have to render all strings as native str() types
The bug here was that the Heat template loader template_format.py
was overriding the global YAML loader to always return unicode.
We don't want this global override.
To fix this we have to use local subclasses of the yaml.SafeLoader
class.
But in order to dynamically subclass from CSafeLoader or SafeLoader
we have to use the type() builtin to define a new class at runtime.
Once we have new classes defined, we can safely isolate different
YAML constructors and return unicode or not depending on the case.
To be consistent we implement a new yaml_loader.py module to centralize
all non-Heat-template YAML loading and ensure correct unicode/str
conversion.
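A sketch of the approach with illustrative names; the real yaml_loader.py may
differ:

    import yaml
    try:
        from yaml import CSafeLoader as BaseSafeLoader
    except ImportError:
        from yaml import SafeLoader as BaseSafeLoader

    # Build a local subclass at runtime with the type() builtin so constructor
    # overrides stay isolated from the global yaml.SafeLoader.
    NativeStrLoader = type('NativeStrLoader', (BaseSafeLoader,), {})

    def _native_str_constructor(loader, node):
        # Force native str() so type checks like Trex's type(val) == str pass.
        return str(loader.construct_scalar(node))

    NativeStrLoader.add_constructor('tag:yaml.org,2002:str', _native_str_constructor)

    def yaml_load(stream):
        return yaml.load(stream, Loader=NativeStrLoader)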
Change-Id: Iebf9cf78fbda390977c390436b0869e7bbf503eb
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Problem:
Running Vsperf in Tgen mode is supported, but the integration is not complete
at the code level, i.e. it is not ready to use, and DPDK loopback is not
supported inside the VM.
Solution:
(1) Completely automates VM image generation and supports 1G huge pages.
(2) Adds a new test scenario VsperfDPDK for testpmd based loopback inside the VM.
Update 1-2: Fixed "line too long" issues not reported by local run_tests.sh (why?)
Update 3: Per comment change to use SSH.from_node() and add unit test cases
Update 4: Add more unit test cases for coverage and ready the code for merge
JIRA: YARDSTICK-661
Change-Id: Iea3014d4c83e1b0c079019a4ed27771d40a7eed8
Signed-off-by: Jing Zhang <jing.c.zhang@nokia.com>
|
|
|
|
JIRA: YARDSTICK-781
This patch adds some common OpenStack operation scenarios.
Change-Id: I854fc435a5c951245a5997cd4e3e63c5162030af
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
|
|
|
|
JIRA: YARDSTICK-771
Change-Id: Ibcd2228505d341feb09b0d477e5f4ed6062c1e89
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
|
|
|
|
JIRA: YARDSTICK-781
This patch adds some common OpenStack operation scenarios.
Change-Id: I3de7dbb30eaebac4feebcf07dd6a0d2bdcf428d9
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
JIRA: YARDSTICK-781
This patch adds some common OpenStack operation scenarios.
Change-Id: I9e84a8894fe9b9c1754a45a0ddfdf93739164b9a
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
Refactored main NSB VNF classes according to the class diagram
https://wiki.opnfv.org/display/yardstick/NSB+class+diagram
All the SampleVNFs have been separated and placed under
the SampleVNF class.
Added AutoConnectSSH to automatically create an SSH connection on demand.
Added VnfdHelper class to wrap the VNFD dictionary in preparation for
class-based modeling.
Extracted DpdkVnfSetupEnvHelper for DPDK based VNF setup.
Extracted Stats and other client config to ResourceHelper
Had to replace dict_key_flatten with deepgetitem due to Python 2.7
Jinja2 infinite recursion.
Change-Id: Ia8840e9c44cdbdf39aab6b02e6d2176b31937dc9
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
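An illustrative deepgetitem-style lookup (the real signature may differ):
walk the nested structure with a dotted key instead of flattening it first:

    def deepgetitem(data, dotted_key, default=None):
        # Walk nested dicts key by key instead of flattening the whole structure.
        current = data
        for part in dotted_key.split('.'):
            try:
                current = current[part]
            except (KeyError, TypeError):
                return default
        return current

    deepgetitem({'vdu': {'external-interface': {'name': 'xe0'}}},
                'vdu.external-interface.name')    # 'xe0'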
|
|
|
|
JIRA: YARDSTICK-770
Bonnie++ is a disk and file system benchmarking tool for measuring I/O performance.
With Bonnie++ you can quickly and easily produce a meaningful value to represent
your current file system performance.
This work adds a new storage test case using Bonnie++.
Change-Id: I752fee156707cda730962c68d17fda4d4e9cd472
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
|
|
apexlake is unmaintained, so remove it
For some reason orchestrator/heat.py started failing,
so fix up those unit tests.
Change-Id: Ie06508b5ab7c9dcf9fdfca83e173a188a894d564
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
|
|
JIRA: YARDSTICK-764
This work adds support for running the SPEC CPU2006 benchmark.
Users must get a "cpu2006-1.2.iso" from the SPEC website and save it under
the /home/opnfv/yardstick/yardstick/resources folder
(e.g. /home/opnfv/yardstick/yardstick/resources/cpu2006-1.2.iso).
Users may also supply a runspec cfg file
(e.g. /home/opnfv/yardstick/yardstick/resources/files/yardstick_spec_cpu2006.cfg).
Change-Id: If4aecc1c14635a07589555196d2edc8bd37d7bdb
Signed-off-by: JingLu5 <lvjing5@huawei.com>
|
|
|
|
JIRA: YARDSTICK-174
This live migration test case is based on shared storage; shared storage
is enabled by default.
The test case does some configuration work, then performs a live migration
and calculates the migration time and downtime.
Change-Id: I6601601edebdd0ac6434ba632b1eba9e9bd4fda0
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|