Age | Commit message | Author | Files | Lines |
|
Tool provisioning in the PROX setup environment helper is not needed [1]. The
tool (the PROX traffic injector) is already provided during the VNF image build:
./ansible/nsb_setup.yml -->
./ansible/build_yardstick_image.yml -->
./ansible/ubuntu_server_cloudimg_modify_samplevnfs.yml -->
./ansible/roles/install_samplevnf
[1]https://github.com/opnfv/yardstick/blob/master/yardstick/network_services/vnf_generic/vnf/prox_helpers.py#L641
JIRA: YARDSTICK-872
Change-Id: I0f925a7967a35a97901fbe5053793a791a7b1b01
Signed-off-by: Rodolfo Alonso Hernandez <rodolfo.alonso.hernandez@intel.com>
|
|
GenericVNF is now an abstract class. Only the optional methods are
implemented (a sketch of the pattern follows below). Mandatory methods:
- instantiate
- wait_for_instantiate
- terminate
- scale
- collect_kpi
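A minimal Python 3 sketch of that pattern, using the abc module; the method
names follow the commit text, but the signatures and docstrings here are
illustrative, not the actual Yardstick API:

    import abc


    class GenericVNF(abc.ABC):

        @abc.abstractmethod
        def instantiate(self, scenario_cfg, context_cfg):
            """Prepare and start the VNF (mandatory)."""

        @abc.abstractmethod
        def wait_for_instantiate(self):
            """Block until the VNF is up and ready (mandatory)."""

        @abc.abstractmethod
        def terminate(self):
            """Stop the VNF and clean up (mandatory)."""

        @abc.abstractmethod
        def scale(self, flavor=""):
            """Scale the VNF up or down (mandatory)."""

        @abc.abstractmethod
        def collect_kpi(self):
            """Return the latest KPI sample as a dict (mandatory)."""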
JIRA: YARDSTICK-866
Change-Id: Ia8766f9f98816e11894d1e72b0f3bd573d091d99
Signed-off-by: Rodolfo Alonso Hernandez <rodolfo.alonso.hernandez@intel.com>
|
|
Move the variable assignment before the variable is read.
Verified via test case:
yardstick/samples/vnf_samples/nsut/ping/tc_ping_heat_context.yaml
JIRA: YARDSTICK-899
Change-Id: Ia2feac1ed4e67dccd02446ba27afc9d40e87be35
Signed-off-by: Jiri Prokes <jirix.x.prokes@intel.com>
|
|
A generic throughput test case that can be used as
stub code for a Linux-based VNF configured as an L3 forwarder.
Supported contexts:
* Standalone OVSDPDK and SRIOV
* Baremetal
Code changes:
* Allow pmd-cpu-mask and lcore mask for OVS DPDK
* router_vnf.py - configures interface IP addresses and static ARP entries
  using the ip command
* NFVi KPIs
* Allow the cputune tag for the standalone context to be able to pin to NUMA node 1 CPUs
SRIOV test cases:
* RFC2544 Ethernet frame sizes, 128K flows
* 2, 4 and 6 ports
* 2 and 3 vcpus per port
OVSDPDK test cases:
* RFC2544 Ethernet frame sizes, 128K flows
* 2 and 4 ports
* 2 vcpus per port
* 2 PMD threads per port
TODO:
* Documentation
* Add 6 port tests
References:
* router_vnf.py is based on sample_vnf.py
* tc_*.yaml files are based on acl/vfw test case files
Added unit tests
Added get_stats to parse ip -s link output
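For illustration, a get_stats helper built on top of ip -s link could parse
the RX/TX counter rows like this (a sketch assuming the standard iproute2
output layout; the real router_vnf.py implementation may differ):

    import subprocess


    def get_stats(ifname):
        """Parse 'ip -s link show dev <ifname>' into RX/TX counters."""
        output = subprocess.check_output(
            ["ip", "-s", "link", "show", "dev", ifname]).decode()
        lines = output.splitlines()
        stats = {}
        for i, line in enumerate(lines):
            # counter values are on the line following the "RX:"/"TX:" header
            if line.strip().startswith("RX:"):
                fields = lines[i + 1].split()
                stats.update(rx_bytes=int(fields[0]), rx_packets=int(fields[1]),
                             rx_errors=int(fields[2]), rx_dropped=int(fields[3]))
            elif line.strip().startswith("TX:"):
                fields = lines[i + 1].split()
                stats.update(tx_bytes=int(fields[0]), tx_packets=int(fields[1]),
                             tx_errors=int(fields[2]), tx_dropped=int(fields[3]))
        return stats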
Change-Id: Id1b969d5420dfcab7c1e695acbd2cd1655747efe
Signed-off-by: Dino Simeon Madarang <dinox.madarang@intel.com>
Signed-off-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Reviewed-by: Alain Jebara <alain.jebara@intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Reviewed-by: Emma Foley <emma.l.foley@intel.com>
Reviewed-by: Rodolfo Alonso Hernandez <rodolfo.alonso.hernandez@intel.com>
Reviewed-by: Ross Brattain <ross.b.brattain@intel.com>
Reviewed-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
Signed-off-by: Dhaval Patel <dhaval.r.patel@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
GenericTrafficGen is now an abstract class. Only the optional methods
are implemented (sketched below).
- 'run_traffic' and 'terminate' are mandatory.
- 'listen_traffic', 'verify_traffic' and 'wait_for_instance' are
  optional. By default these methods do not execute any action.
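The complementary pattern for the traffic generator, again only a sketch: the
two mandatory methods are declared abstract, while the optional hooks get
empty default bodies so subclasses only override what they need (names taken
from the commit text; signatures are illustrative):

    import abc


    class GenericTrafficGen(abc.ABC):

        @abc.abstractmethod
        def run_traffic(self, traffic_profile):
            """Start sending traffic (mandatory)."""

        @abc.abstractmethod
        def terminate(self):
            """Stop the traffic generator (mandatory)."""

        def listen_traffic(self, traffic_profile):
            """Optional hook: do nothing by default."""

        def verify_traffic(self, traffic_profile):
            """Optional hook: do nothing by default."""

        def wait_for_instance(self):
            """Optional hook: do nothing by default."""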
JIRA: YARDSTICK-853
Change-Id: I2befdaa337af79cc2364bdd7c66183c31c5ab69a
Signed-off-by: Rodolfo Alonso Hernandez <rodolfo.alonso.hernandez@intel.com>
|
|
When we create the TRex config we sort based on PCI bus address
and create a logical port ordering.
We need to save this port ordering and re-use it everywhere.
Redirect vnfd_helper.port_num() to resource_helper.port_num() to
use the logical mapping.
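A sketch of that logical ordering (helper and key names are illustrative, not
the actual vnfd_helper/resource_helper API):

    def make_port_mapping(interfaces):
        """Sort interfaces by PCI bus address and assign logical port numbers.

        'interfaces' is a list of dicts such as
        [{'name': 'xe1', 'vpci': '0000:05:00.1'},
         {'name': 'xe0', 'vpci': '0000:05:00.0'}].
        """
        # fixed-width hex PCI addresses sort correctly as strings
        ordered = sorted(interfaces, key=lambda iface: iface['vpci'])
        return {iface['name']: num for num, iface in enumerate(ordered)}

The resulting mapping (e.g. xe0 -> 0, xe1 -> 1) is then saved once and reused
wherever a port number is needed instead of being recomputed.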
Change-Id: Ibff628556d5e11e686e15716a66a3210758c4ff0
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
network_services.vnf_generic.vnf.prox_helpers.ProxSocketHelper.rx_stats"
|
|
|
|
* Remove the method which is unused and is marked as deprecated.
Change-Id: Ie64084fcd26985283c664445b173a757d3d908ab
JIRA: YARDSTICK-838
Signed-off-by: Emma Foley <emma.l.foley@intel.com>
|
|
JIRA: YARDSTICK-802
Addition of PROX LW_AFTR based on PROX/DATS v037 test_104_lw_aftr.py
- This supports BM and OpenStack Heat
- This supports 4 ports only
- Grafana dashboards included
- Code coverage / unit testing
Change-Id: If2170ab458bf687256d5f1a1e840a3b9d2788ef7
Signed-off-by: Daniel MArtin Buckley <daniel.m.buckley@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
|
|
Change-Id: Id8aa734fee431d90cbdc1e0eb2173784ada822fe
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
We don't really know what core we are going to use for the
VNF before we generate the MultiPortConfig, so these
methods are incorrect.
We may need some more advanced method to validate
the vCPU topology, but that will probably happen
in the MultiPortConfig
Change-Id: Ifee7ae4589ce0fce67771fb8d8b8dce0a0f06409
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Removed the abs() function, which can potentially mask
negative dropped-packet counts.
Dropped packets in a Prox workload VNF = rx_packets - tx_packets
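A worked example of the difference (illustrative numbers):

    rx_packets = 1000000   # received by the workload VNF
    tx_packets = 995000    # forwarded back out
    dropped = rx_packets - tx_packets   # 5000
    # The old abs(rx_packets - tx_packets) would also turn a negative result
    # (tx > rx, which signals a counting problem) into a plausible-looking
    # positive drop count and hide the error.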
Change-Id: I510a351e899cdf9a1f366d632b9f0528b1d9dcce
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
|
|
Change-Id: Ie770ca69ebdc66589ed6ca5c25bfc9a75afb8938
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: I1f457c9c24f2ca84dde61b64f58edaff8952670a
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
Change-Id: I8674caa15c9fc32cfacb17f558da5fb31094877e
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-802
Addition of Prox vPE test case
- The test supports BM and OpenStack Heat
- Supports 4 ports
- Grafana dashboards included
- Added support for parameters.lua
for prox additional files
- Unit tests for code coverage
Change-Id: I5cccb351dacba88a293ae4b8aba1f0a803d62e6d
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel MArtin Buckley <daniel.m.buckley@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I27bcc41c855f34fb1fd0332fc24e7bf0b2af4ec2
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
The PROX tests were hanging in the duration
runner.
These are fixes for various errors:
raise an error in collect_kpi if the VNF is down
move the prox dpdk_rebind after collectd stop
fix dpdk nicbind rebind to group by drivers
prox: raise an error in collect_kpi if the VNF is down
prox: add VNF_TYPE for consistency
sample_vnf: debug and fix kill_vnf;
pkill is not matching some executable names, so
add some debug process dumps and try switching
back to killall until we can find the issue
sample_vnf: add a default timeout, so we can override
the default 3600 second SSH timeout
collect_kpi is the point at which we check
the VNFs and TGs for failures or exits
queues are the problem: make sure we aren't silently blocking on
non-empty queues by canceling the queue join thread in the subprocess
fixup the duration runner to close queues,
and other attempts to stop the duration runner
from hanging
VnfdHelper: memoize port_num
resource: fail if ssh can't connect;
at the end of a 3600 second test our SSH connection
is dead, so we can't actually stop collectd
unless we reconnect
fix stop() logic to ignore ssh errors
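The queue behaviour being worked around can be shown with a small standalone
sketch (illustrative, not the runner code itself): a child process that exits
while a multiprocessing.Queue still holds unflushed items will block in the
queue's feeder thread unless cancel_join_thread() is called.

    import multiprocessing


    def worker(queue):
        # Tell this process not to wait for the feeder thread to flush the
        # queue at exit; without this, the process can hang forever if the
        # parent never drains the queue (at the cost of losing queued data).
        queue.cancel_join_thread()
        for sample in range(100000):
            queue.put(sample)


    if __name__ == '__main__':
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=worker, args=(q,))
        p.start()
        p.join(timeout=10)   # returns promptly instead of hanging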
Change-Id: I6c8e682a80cb9d00362e2fef4a46df080f304e55
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Set the TRex -c option for threads per port based on
the hardware number of queues.
We can't auto-detect the number of queues and we can't
use more than one thread per core on systems with single-queue
interfaces, so move the option to the config file:
    options:
      tg_0:
        queues_per_port: 2
Also enable TRex debug by removing the >/dev/null redirection:
    options:
      tg_0:
        trex_server_debug: true
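A sketch of how such options might be consumed when the TRex server command
line is built (the key names mirror the snippets above; the function, the
exact flag semantics and the default values are illustrative):

    def trex_server_cmd(scenario_options, cfg_file='/etc/trex_cfg.yaml'):
        """Build a t-rex-64 command line from per-node scenario options."""
        tg_options = scenario_options.get('tg_0', {})
        threads_per_port = tg_options.get('queues_per_port', 1)
        cmd = 'sudo ./t-rex-64 -i --cfg %s -c %d' % (cfg_file, threads_per_port)
        if not tg_options.get('trex_server_debug', False):
            # without debug, keep hiding the server console output
            cmd += ' >/dev/null'
        return cmd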
Change-Id: I46da187849282bf28f4ef5b333e1ae890e202768
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
|
|
JIRA: YARDSTICK-802
Added Prox BNG and BNG-QoS tests
- The tests support BM and OpenStack Heat
- Supports 4 ports
- Test added for the BNG traffic profile
- Fixed the Prox Heat test cases with
  proper upstream and downstream links
- Grafana dashboards for BNG & BNG-QoS added
- Increased the test duration to 300
TODO:
- Test does not terminate correctly
Update:
Added a new helper class for run_test: Generic, MPLS
and BNG tests.
Change-Id: Ib40811bedb45a3c3030643943f32679a4044e076
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
|
|
We have the collectd.conf inside the Python package,
so instead of copying it from various places,
write the template directly to the remote system.
collectd: read the collectd.conf template with pkg_resources
read the collectd.conf file as a string directly
and upload it without creating a temp file
use Jinja2 template, disable failing plugins
use a proper Jinja2 template, disable the plugins that
were failing to load and blocking startup
add support for per-testcase collectd.conf config
using YAML
add support for a custom interval, default is 25 seconds
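Roughly the flow being described, sketched with pkg_resources and Jinja2 (the
package path, resource name and template variables are illustrative):

    import pkg_resources
    import jinja2


    def render_collectd_conf(package, resource, interval=25, plugins=()):
        """Read the packaged collectd.conf Jinja2 template and render it to a
        string, so it can be uploaded over SSH without a local temp file."""
        template_str = pkg_resources.resource_string(package, resource).decode('utf-8')
        return jinja2.Template(template_str).render(interval=interval,
                                                    plugins=plugins)

    # e.g. conf = render_collectd_conf('yardstick.network_services.nfvi',
    #                                  'collectd.conf', interval=25)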
Change-Id: Id904f7b7c9f41a9dd7adf5dfa06c064d65c25d2d
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: Ia934128777d2839f6d2b940857c266fc3e2bd4a1
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: Ia9722604b7c8ae23e784e780f113d012de544d4b
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
we were using raw sort index of the interfaces to
set the MAC address, but we should be using the
traffic id from the static JSON instead.
Change-Id: I13284db04abb3eaf8c9826974a9e5aa1c37b3891
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I26957977e6dcd0392078a543a6907a550711c702
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
|
|
The problem is that we share the same ProxResourceHelper
for both the VNF and the TG.
For the VNF we want to talk to resource.py and get collectd KPIs.
For the TG we need to read the TG-calculated KPIs from the queue and
we also want collectd KPIs.
The workaround is to use a different method name, collect_collectd_kpi,
for VNFs.
Change-Id: Icc2132758e37ce210f5600a0cd433077930208e5
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
The prox files were being found correctly.
If we use find_relative_file they will be looked up
relative to the task_path.
Change-Id: Ifde5d07df5ccfbfeba015b2f43bd8b53e89a00b7
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
We generate the prox_config_dict in the _run Process,
but we also need it in the _traffic_runner Process to
get core info.
Use a queue to pass the config list between the processes.
Enable collect_kpi.
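A self-contained sketch of that hand-off (the config content and names are
made up; only the Queue mechanics are the point):

    import multiprocessing


    def traffic_runner(config_queue):
        # block until the process that generated the prox config publishes it,
        # then pull the core information out of it
        prox_config_data = config_queue.get(timeout=120)
        core_sections = [name for name, _ in prox_config_data
                         if name.startswith('core')]
        print('traffic runner sees %d core sections' % len(core_sections))


    if __name__ == '__main__':
        queue = multiprocessing.Queue()
        # stand-in for the generated prox config: a list of (section, options)
        prox_config = [('core 0', [('mode', 'master')]),
                       ('core 1', [('name', 'gen'), ('task', '0')])]
        runner = multiprocessing.Process(target=traffic_runner, args=(queue,))
        runner.start()
        queue.put(prox_config)
        runner.join()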
Change-Id: Ibaf41d606e559a87addf43d6ddaed206dbd2d20c
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
Also rename private to uplink and public to downlink.
For the scale-out template we need to count from 0,
so we can use range() without +1/-1 errors:
vnf_0, vnf_1
tg_0, tg_1
Also fix Ixia defaults.
Change-Id: I6aecfbb95f99af20f012a9df19c19be77d1b5b77
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
Change-Id: Icf7a01a053495e6d96bd664d6ceda8964fa437eb
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Add a new PortPair class to resolve the
topology into lists of public and private ports.
Before, we were calculating public/private in multiple
locations and using different conventions.
In addition, for all the DPDK tests we need to use the DPDK
port number and not rely on interface ordering or interface naming
conventions.
We used to use xe0 -> 0, xe1 -> 1, etc. This is not the DPDK port
number.
Use the new dpdknicbind_helper class to parse the output of
dpdk-devbind.py to find the actual DPDK port number at runtime.
We then use this DPDK port number to correctly calculate the
port_mask_hex.
The port mask maps the DPDK port num (PMD ID) to the LINK ID
used in the pipeline config.
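For illustration, the mask computation this implies (a sketch, not the
helper's actual code):

    def port_mask_hex(dpdk_port_nums):
        """Build the hex port mask from DPDK port numbers (PMD IDs).

        e.g. DPDK ports [0, 2, 3] -> 0b1101 -> '0xd'
        """
        mask = 0
        for port_num in dpdk_port_nums:
            mask |= 1 << port_num
        return hex(mask)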
We also need to make sure we only use the interfaces matched to the
topology and not all the interfaces, because in some cases we will
have unused interfaces. In particular, TRex always requires an even
number of interfaces, so for single-port TRex tests we have to create
the second port and not use it.
Thus we had to modify the traffic generator stats code to only dump
stats for used ports and not for unused ports.
Ixia was using interface ordering to map to Ixia ports; instead we use
the dpdk_port_num, which must be hardcoded for Ixia.
Renamed traffic_profile.execute to traffic_profile.execute_traffic so
we can trace the code more easily.
We pass the ports used by the traffic profile to generate_samples so we
don't get stats for unused ports.
Fixed up vPE config creation and bring-up issues.
Fixed up CGNAPT and UDP_Replay to work correctly.
Tested with 4-port scale-out.
Change-Id: I2e4f328bff2904108081e92a4bf712333fa73869
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
|
|
Previously we added all servers to every network
in Heat in a full mesh.
To more closely replicate the test topology and to limit
the number of ports, we need to allow each server
to specify which ports should be connected to each network.
This should also allow for some kind of multiport setup.
Add an optional network_ports dict to each server with a network to port_list
mapping.
Match interfaces based on port name or vld_id.
Replace vld_id matching with network name matching, since network_name == vld_id.
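For illustration, a server entry using the optional network_ports mapping
might look like this (the network and port names are hypothetical):

    servers:
      vnf_0:
        network_ports:
          mgmt:
            - mgmt
          uplink_0:
            - xe0
          downlink_0:
            - xe1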
Change-Id: I5de46b8f673949e3c17d8df6fa96f055c43886ce
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I1c529eeb0ef47752ed15e3e7941f57f7793ebfd4
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Line parser handles comments, keys and values
and makes exceptions.
Change-Id: I5cd3612ffd8cb08b14051bd0ef4b757c310f77bd
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
|
|
Change-Id: I346f6064c39cb5662c2b17ca0f520addbe5eae4c
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
- Added a PROTOCOL_MAP to map the protocol names to codes -- scapy
  requires the code; it fails if the proto is set e.g. to 'udp'
- IP addresses must be str, not unicode -- explicit conversion to str
  added
- Removed the unit test for setup_vnf_environment in test_tg_trex.py as
  it is the same function as already tested in test_sample_vnf.py
- traffic_profile refactored -- code repetition decreased, unit tests
  adapted
Known issues:
- There is an attempt to stop an already stopped TRex. It fires an
  exception that the stop command is issued on a disconnected client.
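The PROTOCOL_MAP idea, roughly (the values are the standard IP protocol
numbers; the real mapping in the traffic profile may contain more entries and
live elsewhere):

    # IP protocol numbers as scapy expects them in the 'proto' field
    PROTOCOL_MAP = {
        'icmp': 1,
        'tcp': 6,
        'udp': 17,
    }


    def proto_to_code(proto):
        """Accept either a name such as 'udp' or an already numeric code."""
        if isinstance(proto, int):
            return proto
        return PROTOCOL_MAP[str(proto).lower()]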
Change-Id: I87e9029630f48b30e8f5b4f9d88ab3b25fd65f03
Signed-off-by: Martin Banszel <martinx.banszel@intel.com>
|
|
|
|
Change-Id: If679333dc1cb9e041a332fb374c55f72eaab1b28
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-802
Addition of Prox L2Fwd and MPLS test cases for BM and Heat.
Updates:
Most of tg_prox and prox_vnf were absorbed into the base classes.
Delete most of ProxDpdkVnfSetupEnvHelper; it is handled by the DpdkVnfSetupEnvHelper baseclass.
Use the standard _build_pipeline_kwargs methods.
Don't use terminate(); use the baseclass version.
Add a new method kill_vnf that runs pkill -x.
Replace resource_helper.execute() with vnf_execute for dumping stats.
In order to share code between tg_prox and vnf_prox,
refactor to have tg_prox hold and wrap a ProxApproxVnf instance and call
methods on that class. Do this instead of multiple inheritance.
Implement ProxApproxVnf.terminate() using prox socket command
based exit (stop_all, quit, force_quit).
vnf_execute calls resource_helper.execute(), which calls
socket methods on the sut object.
Since tg_prox wraps the VNF object, we can call
terminate on the VNF object and it should work correctly.
Move prox config generation to the parent process:
we need to get core number info from the config file
inside the TG processes, so we need to generate
the config in the parent process so the data is
copied to the child during the fork.
Moved more config file methods to the setup_helper class.
We run force_quit after quit, so the socket should already be closed;
this will trigger a socket error, so add an _ignore_errors option for
vnf_execute to ignore socket errors.
Fixed the terminate issue. Added MPLS tests.
Added TG stats in_packet/out_packet.
Fixed compile (pep8) issues.
Fixed MPLS TG port stats, in/out packets.
Added Grafana dashboards for L2FWD and MPLS.
Traffic profiles modified for tolerated loss and
precision as per the DATS tests.
Added unit test case for MPLS.
Single port test stats collection support.
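A sketch of the terminate sequence described above (the method and attribute
names follow the commit text; the surrounding class details are illustrative):

    import socket


    class ProxApproxVnfSketch(object):
        """Illustrative only: stop prox via its socket commands."""

        def __init__(self, resource_helper):
            self.resource_helper = resource_helper

        def vnf_execute(self, cmd, *args, **kwargs):
            ignore_errors = kwargs.pop('_ignore_errors', False)
            try:
                return self.resource_helper.execute(cmd, *args)
            except socket.error:
                if not ignore_errors:
                    raise

        def terminate(self):
            # stop all cores, then ask prox to quit; force_quit runs after the
            # socket may already be closed, so ignore socket errors there
            self.vnf_execute('stop_all')
            self.vnf_execute('quit')
            self.vnf_execute('force_quit', _ignore_errors=True)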
Change-Id: Idd9493f597c668a3bb7d90e167e6a418546106e8
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
8192 * 2048kB = 16GB
Change-Id: I82bf420794e5174e88cfaea08b9fab0d77c2be7f
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
- added common method to get relative paths
- added 'Ixia' APP_NAME
Change-Id: I7966798bab71af66d3efbeb1e13b07e8fbb41e88
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
|
|
return super result
Change-Id: I723a37281da15c1887ae1b3cf91d7e957b1924d1
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
- path should be defined via the TREX_CLIENT_LIB environment variable, e.g. TREX_CLIENT_LIB=/opt/trex_client/stl
- refactored unit tests
Change-Id: I18767e48daf774432c010f1b88d18a4f0ee4e156
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
|
|
There are multiple issues with YAML loading.
1. Jinja2 renders None values as the string 'None'. This is not valid YAML;
we need to render None values as '~' or 'null', which is the native YAML
null value.
2. Jinja2 renders dicts and lists that contain unicode with
u'foo' values. This is not valid YAML syntax.
Because we are serializing dicts and lists into YAML, we
need to encode them as valid YAML. We can override Jinja2 finalize to
use yaml.dump to dump inline YAML.
We use yaml.safe_dump(elem, default_flow_style=True).replace('\n', '')
to generate valid single-line YAML dict and list values.
But this problem highlights the general difficulties with templating and
loading files.
We could avoid this Python->Jinja2->YAML->Python issue by directly
injecting the list or dict after the YAML is loaded.
I'm not sure of the real utility of these templates.
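A sketch of the finalize override described in point 2 (illustrative; the real
template environment setup lives in the task/template loading code):

    import jinja2
    import yaml


    def finalize_for_yaml(value):
        """Render None as '~' and dicts/lists as single-line YAML so that the
        rendered template is still valid YAML."""
        if value is None:
            return '~'
        if isinstance(value, (dict, list)):
            return yaml.safe_dump(value, default_flow_style=True).replace('\n', '')
        return value


    env = jinja2.Environment(finalize=finalize_for_yaml)
    template = env.from_string('key: {{ value }}')
    print(template.render(value={'a': [1, 2], 'b': None}))
    # -> key: {a: [1, 2], b: null}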
3. On Python 2 the YAML loader is rendering all strings
as unicode. This does not work for Trex, because Trex is broken
and badly coded. Trex does type checking against str(), which
is different for Python 2 and Python 3.
The default YAML loader will return native string types, str() or unicode(),
for Python 2 and Python 3 respectively.
The bad Trex code is in convert_val:
https://github.com/cisco-system-traffic-generator/trex-core/blob/master/scripts/automation/trex_control_plane/stl/trex_stl_lib/trex_stl_packet_builder_scapy.py#L674
    def convert_val (val):
        if is_integer(val):
            return val
        if type(val) == str:
            return ipv4_str_to_num (is_valid_ipv4(val))
        raise CTRexPacketBuildException(-11,("init val invalid %s ") % val );
This code is doing type(val) == str. This is bad and broken.
We can't fix Trex, so we have to render all strings as native str() types.
The bug here was that the Heat template loader template_format.py
was overriding the global YAML loader to always return unicode.
We don't want this global override.
To fix this we have to use local subclasses of the yaml.SafeLoader
class.
But in order to dynamically subclass from CSafeLoader or SafeLoader
we have to use the type() builtin to define a new class at runtime.
Once we have the new classes defined, we can safely isolate the different
YAML constructors and return unicode or not depending on the case.
To be consistent we implement a new yaml_loader.py module to centralize
all non-Heat template YAML loading and ensure correct unicode/str
conversion.
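A sketch of the dynamic loader subclassing described in point 3 (illustrative;
the real yaml_loader.py may differ, and a production version would also need
to handle non-ASCII strings on Python 2):

    import yaml

    try:
        from yaml import CSafeLoader as BaseSafeLoader
    except ImportError:
        from yaml import SafeLoader as BaseSafeLoader

    # Define a local subclass at runtime so constructors registered on it do
    # not leak into the global SafeLoader used by the Heat template loader.
    NativeStrSafeLoader = type('NativeStrSafeLoader', (BaseSafeLoader,), {})


    def _construct_native_str(loader, node):
        # return the native str() type on both Python 2 and Python 3
        return str(loader.construct_scalar(node))


    NativeStrSafeLoader.add_constructor('tag:yaml.org,2002:str',
                                        _construct_native_str)


    def yaml_load(stream):
        return yaml.load(stream, Loader=NativeStrSafeLoader)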
Change-Id: Iebf9cf78fbda390977c390436b0869e7bbf503eb
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|