Age | Commit message | Author | Files | Lines
|
|
Change-Id: Idc54fc907dba4603984712fc43a0db8dfd4b7374
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: Ibd159359c6f57d573a909d6841c121c15bf692c1
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
This is required because influxdb results were being reported
"in the future" when the timezones did not match.
Change-Id: Ic41e19d26c46b6ccfa6dacddb595236af19e437a
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
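A minimal sketch of the idea, assuming the dispatcher derives its InfluxDB
timestamps from UTC epoch time rather than naive local time (helper names
are illustrative, not the actual Yardstick code):

    import time
    from datetime import datetime, timezone

    def influx_timestamp_ns():
        # time.time() is UTC epoch seconds regardless of the host timezone,
        # so a point can never land "in the future" on the InfluxDB server.
        return int(time.time() * 1e9)

    def parse_timestamp_utc(ts_str):
        # When a timestamp string has to be parsed, attach an explicit UTC
        # timezone instead of trusting the naive local wall-clock time.
        dt = datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S")
        return dt.replace(tzinfo=timezone.utc).timestamp()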
|
|
Just deepcopy and change. This will probably break in
the future if we use anything other than dicts and lists.
Change-Id: I9a9b0c5b09b3e3ebd7ed593bf6339ea030605f93
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
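A minimal sketch of the deepcopy-and-change approach described above
(illustrative data, not the actual Yardstick structures):

    import copy

    base = {"options": {"tg_0": {"queues_per_port": 1}},
            "nodes": ["tg_0", "vnf_0"]}

    variant = copy.deepcopy(base)
    variant["options"]["tg_0"]["queues_per_port"] = 2

    # The original is untouched because deepcopy recursively copies dicts and
    # lists; objects with custom copy semantics would break this assumption.
    assert base["options"]["tg_0"]["queues_per_port"] == 1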
|
|
Change-Id: I27bcc41c855f34fb1fd0332fc24e7bf0b2af4ec2
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-802
- Updated the BNG code with a minor refactor.
- Corrected the CPE core name.
- Updated the binsearch traffic profile with 64B frames.
Change-Id: Iae0be766edb986520045655fa567651711813a8b
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
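An illustrative sketch of a binary-search throughput step with 64B frames,
assuming a run_trial callback that returns the measured drop ratio (this is
not the actual Yardstick traffic-profile code):

    def binsearch_throughput(run_trial, lo=0.0, hi=100.0,
                             tolerance=0.0001, steps=10):
        # Narrow the transmit rate (percent of line rate) for 64-byte frames
        # until the measured drop ratio is within tolerance.
        best = lo
        for _ in range(steps):
            rate = (lo + hi) / 2.0
            drop_ratio = run_trial(rate, frame_size=64)
            if drop_ratio <= tolerance:
                best, lo = rate, rate   # trial passed: search higher
            else:
                hi = rate               # trial failed: search lower
        return best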
|
|
JIRA: YARDSTICK-790
Change-Id: I6bb36c98b8673155d3142fc54cfb39315d5ce613
Signed-off-by: qiujuan <juan_qiu@tongji.edu.cn>
|
|
To be used with yardstick/etc/yardstick/nodes/pod.yaml.collectd.sample
Change-Id: I6eff4f6adf57596e06c685ab87b83699696ad7b6
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
|
|
- vFW
- vCGNAPT
- vACL
- UDP Replay
- vPE (Only OVS supported)
Change-Id: Idbc4d1d6bc1283e40d2fcb9457a871a9198ad147
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: I2755b596068545c1a3a672ceff47d814a44ae050
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: I7da2d5bcd7c58c669e28a7271e4c6848c003e84a
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
2-node setup:
- Traffic generator starts a new stream on both uplink and downlink
This patch adds ansible scripts to enable scale_out testcases:
- vfw
Change-Id: I0340636bce3e74cd6175f728b9e7e014a4eb2fd5
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
The PROX tests were hanging in the duration runner.
These are fixes for various errors:
- raise an error in collect_kpi if the VNF is down
- move the prox dpdk_rebind after collectd stop
- fix dpdk nicbind rebind to group by drivers
- prox: raise an error in collect_kpi if the VNF is down
- prox: add VNF_TYPE for consistency
- sample_vnf: debug and fix kill_vnf; pkill is not matching some
  executable names, so add some debug process dumps and switch back
  to killall until we can find the issue
- sample_vnf: add a default timeout so we can override the default
  3600-second SSH timeout
- collect_kpi is the point at which we check the VNFs and TGs for
  failures or exits
- queues are the problem: make sure we aren't silently blocking on
  non-empty queues by canceling the join thread in the subprocess
- fix up the duration runner to close queues, and other attempts to
  stop the duration runner from hanging
- VnfdHelper: memoize port_num
- resource: fail if SSH can't connect; at the end of a 3600-second
  test our SSH connection is dead, so we can't actually stop collectd
  unless we reconnect
- fix stop() logic to ignore SSH errors
Change-Id: I6c8e682a80cb9d00362e2fef4a46df080f304e55
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
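A minimal sketch of the fail-fast collect_kpi check mentioned above (class
and function names are illustrative, not the actual sample_vnf API):

    class VnfDownError(RuntimeError):
        """Raised when KPIs are requested from a VNF whose process exited."""

    def collect_kpi(vnf_process, read_stats):
        # vnf_process: a subprocess.Popen-like handle for the VNF
        # read_stats:  callback that fetches the current KPI dict
        if vnf_process.poll() is not None:
            raise VnfDownError("VNF exited with return code %s"
                               % vnf_process.returncode)
        return read_stats()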
|
|
Set the TRex -c option (threads per port) based on the hardware
number of queues.
We can't auto-detect the number of queues, and we can't use more
than one thread per core on systems with single-queue interfaces,
so move the option to the config file:
  options:
    tg_0:
      queues_per_port: 2
Also enable TRex debug by removing the >/dev/null redirection:
  options:
    tg_0:
      trex_server_debug: true
Change-Id: I46da187849282bf28f4ef5b333e1ae890e202768
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
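An illustrative sketch of how these options could feed the TRex server
command line (the exact flags and defaults used by Yardstick may differ):

    def build_trex_cmd(scenario_options, trex_bin="t-rex-64"):
        tg_opts = scenario_options.get("tg_0", {})
        threads_per_port = tg_opts.get("queues_per_port", 1)
        cmd = "{} -i -c {}".format(trex_bin, threads_per_port)
        if not tg_opts.get("trex_server_debug", False):
            cmd += " >/dev/null"   # stay quiet unless debug output is wanted
        return cmd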
|
|
We don't want to block the test waiting to put KPIs.
Add a moderate timeout. In case we do time out, it
doesn't matter if we drop intermittent KPIs.
Change-Id: I049c785355993e6b286748a5c897d54dd2923dc9
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
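A minimal sketch of the bounded put described above (the timeout value is an
assumption, not necessarily what Yardstick uses):

    from queue import Full

    QUEUE_PUT_TIMEOUT = 10  # seconds

    def put_kpi(output_queue, kpi):
        try:
            output_queue.put(kpi, timeout=QUEUE_PUT_TIMEOUT)
        except Full:
            # Dropping an intermittent KPI sample is preferable to blocking
            # the whole test run on a full queue.
            pass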
|
|
Allow manually adding collectd nodes using the Node context.
If a node is present with a collectd config dict, then we can
create a ResourceProfile object for it and connect to collectd.
Example:
  nodes:
  -
    name: compute_0
    role: Compute
    ip: 1.1.1.1
    user: root
    password: r00t
    collectd:
      interval: 5
      plugins:
        ovs_stats: {}
Change-Id: Ie0c00fdb58373206071daa1fb13faf175c4313e0
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
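An illustrative sketch of the selection logic (names are assumptions; the
real NodeContext / ResourceProfile wiring differs in detail):

    def make_collectd_profiles(nodes, resource_profile_cls):
        # Only nodes that carry a "collectd" dict get a resource profile.
        profiles = {}
        for node in nodes:
            collectd_cfg = node.get("collectd")
            if collectd_cfg:
                profiles[node["name"]] = resource_profile_cls(node, collectd_cfg)
        return profiles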
|
|
Change-Id: I80aa7e796b9ca4c4881c78310860e293a4a75560
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Empty ssh_options led to ssh login failure, which stopped the whole
script.
Change-Id: I8374a30a02b14d04eb0f623a0c58d7ebed77a589
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
JIRA: YARDSTICK-802
Updated the handle config for the l3fwd 2 port test.
The tx and rx descriptors were removed as they were
not in the original DATS config.
The BM test was dropping packets because of this.
Change-Id: I40d113267cbb3376a772b5a5aaecf74bea9d06fb
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
|
|
Sometimes the runners can hang. Initially
debugging led to the queue join thread, so I thought
we could cancel all the join threads and everything would be okay.
But it turns out canceling the queue join threads can lead
to corruption of the queues, so when we go to drain the queues
the task hangs.
It also turns out that we were not properly draining
the queues in the task process. We were waiting for all
the runners to exit, then draining the queues.
This is bad and will cause the queues to fill up and hang
and/or drop data or corrupt the queues.
The proper fix seems to be to drain the queues in a
loop before calling join with a timeout.
Also modified the queue drain loops to not block on queue.get().
Revert "cancel all queue join threads"
This reverts commit 75c0e3a54b8f6e8fd77c7d9d95decab830159929.
Revert "duration runner: add teardown and cancel all queue join threads"
This reverts commit 7eb6abb6931b24e085b139cc3500f4497cdde57d.
Change-Id: Ic4f8e814cf23615621c1250535967716b425ac18
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
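A minimal sketch of the drain-then-join pattern, assuming runner objects
expose a result_queue and a process handle (not the exact base.Runner API):

    from queue import Empty

    def drain_queue(q):
        items = []
        while True:
            try:
                items.append(q.get(block=False))   # never block on an empty queue
            except Empty:
                return items

    def join_runner(runner, results, join_timeout=5):
        # Keep draining while waiting so the queue's feeder thread can always
        # flush, then do a final drain once the process has exited.
        while runner.process.is_alive():
            results.extend(drain_queue(runner.result_queue))
            runner.process.join(join_timeout)
        results.extend(drain_queue(runner.result_queue))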
|
|
In some cases we are blocking in base.Runner join() because the
queues are not empty.
Call cancel_join_thread to prevent the Queue from blocking the
Process exit.
Quoting https://docs.python.org/3.3/library/multiprocessing.html#all-platforms:
Joining processes that use queues
Bear in mind that a process that has put items in a queue will wait
before terminating until all the buffered items are fed by the
"feeder" thread to the underlying pipe. (The child process can call
the cancel_join_thread() method of the queue to avoid this behaviour.)
This means that whenever you use a queue you need to make sure that
all items which have been put on the queue will eventually be removed
before the process is joined. Otherwise you cannot be sure that
processes which have put items on the queue will terminate. Remember
also that non-daemonic processes will be joined automatically.
Warning
As mentioned above, if a child process has put items on a queue (and
it has not used JoinableQueue.cancel_join_thread), then that process
will not terminate until all buffered items have been flushed to the
pipe.
This means that if you try joining that process you may get a deadlock
unless you are sure that all items which have been put on the queue
have been consumed. Similarly, if the child process is non-daemonic
then the parent process may hang on exit when it tries to join all its
non-daemonic children.
cancel_join_thread()
Prevent join_thread() from blocking. In particular, this prevents the
background thread from being joined automatically when the process
exits – see join_thread().
A better name for this method might be allow_exit_without_flush(). It
is likely to cause enqueued data to be lost, and you almost certainly
will not need to use it. It is really only there if you need the
current process to exit immediately without waiting to flush enqueued
data to the underlying pipe, and you don’t care about lost data.
Change-Id: I61f11a3b01109d96b7a5445c60f1e171401157fc
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
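A minimal, self-contained sketch of the cancel_join_thread() behaviour
described in the quoted documentation (illustrative only, not the runner code):

    import multiprocessing

    def worker(output_queue):
        # Allow this process to exit without flushing buffered queue items;
        # anything still buffered when it exits may be lost.
        output_queue.cancel_join_thread()
        for i in range(1000):
            output_queue.put({"sample": i})

    if __name__ == "__main__":
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=worker, args=(q,))
        p.start()
        p.join()   # does not deadlock even though q may still hold items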
|
|
Calculate the timeout once.
Catch exceptions in benchmark.teardown().
In some cases we are blocking in base.Runner join() because the
queues are not empty.
Call cancel_join_thread to prevent the Queue from blocking the
Process exit.
See the multiprocessing documentation on joining processes that use queues,
quoted in full above (https://docs.python.org/3.3/library/multiprocessing.html#all-platforms).
Change-Id: If7b904a060b9ed68b7def78c851deefca4e0de5d
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
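An illustrative sketch of the "catch exceptions in benchmark.teardown()"
change (the logger and wrapper are assumptions, not the actual runner code):

    import logging

    LOG = logging.getLogger(__name__)

    def safe_teardown(benchmark):
        try:
            benchmark.teardown()
        except Exception:  # deliberately broad: teardown must not kill the runner
            LOG.exception("Exception while tearing down the benchmark, continuing")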
|
|
In some cases we are blocking in base.Runner join() because the
queues are not empty.
Call cancel_join_thread to prevent the Queue from blocking the
Process exit.
See the multiprocessing documentation on joining processes that use queues,
quoted in full above (https://docs.python.org/3.3/library/multiprocessing.html#all-platforms).
Change-Id: I345c722a752bddf9f0824a11cdf52ae9f04669af
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I05cb069984b7674924cfcb1ed023048c0aa0c444
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I780aa3ea6b04df08baffb5ee5beff66bdc37f37e
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|