|
collectd: read collectd.conf template with pkg_resources
We have the collectd.conf inside the Python package,
so instead of copying it from various places,
write the template directly to the remote system.
Read the collectd.conf file as a string directly
and upload it without creating a temp file.
Use a proper Jinja2 template and disable the plugins
that were failing to load and blocking startup.
Add support for per-testcase collectd.conf config
using YAML.
Add support for a custom interval; the default is 25 seconds.
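A minimal sketch of the approach, assuming an illustrative package/resource path and template variables; the actual resource name and context keys in the code may differ:

    import pkg_resources
    from jinja2 import Template

    # Read the packaged collectd.conf template as a string (no temp file needed).
    # Package name and resource path are illustrative assumptions.
    template_str = pkg_resources.resource_string(
        "yardstick.network_services.nfvi", "collectd.conf").decode("utf-8")

    # Render with a per-testcase interval and plugin selection, then the rendered
    # text can be uploaded directly to the remote system.
    rendered = Template(template_str).render(interval=25, plugins=["cpu", "memory"])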
Change-Id: Id904f7b7c9f41a9dd7adf5dfa06c064d65c25d2d
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: Ic8aa130f3cdc7bd8dec39d06a6b824340bf658b2
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-816
Change-Id: Ib7eb411b940775915c6c9f87ac5cdc9825069467
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
|
|
Change-Id: Ia934128777d2839f6d2b940857c266fc3e2bd4a1
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: Ic384c4603e7482f150fd4c4d2d6a6448a45ddb9e
Signed-off-by: Trevor Tao <trevor.tao@arm.com>
|
|
Allow specifying the Kubernetes node to run on when creating containers
for the Kubernetes context.
For example, a yaml file may look like:
servers:
  host:
    image: xxx
    command: /bin/bash
    nodeSelector:
      xxx: yyy
Also update the unit test for this function accordingly.
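A rough sketch of how such a nodeSelector could be folded into the generated pod spec, using a hypothetical helper and dict layout rather than the real Kubernetes template code:

    def add_node_selector(pod_template, node_selector):
        """Attach a nodeSelector to a Kubernetes pod template dict, if one was given."""
        if node_selector:
            pod_template.setdefault("spec", {})["nodeSelector"] = dict(node_selector)
        return pod_template

    pod = add_node_selector({"spec": {"containers": []}}, {"xxx": "yyy"})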
Change-Id: If74c9dad9b1a70395bb79f34708a0fde04e7e650
Signed-off-by: Trevor Tao <trevor.tao@arm.com>
|
|
To avoid the following error when running a netperf test between
two different subnets:
ERROR netperf: send_omni: send_data failed: Network is unreachable
For details, please see:
https://serverfault.com/questions/802320/netperf-iptables-masquerade-network-unreachable
or:
https://stackoverflow.com/questions/11981480/error-in-running-netperf-udp-stream-over-openvpn
Change-Id: I62b202844861440deaf3bf0f65b41561bd87ae87
Signed-off-by: Trevor Tao <trevor.tao@arm.com>
|
|
JIRA: YARDSTICK-812
Currently the grafana data source configuration is hardcoded,
which is a risk, so read it from yardstick.conf instead.
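A minimal sketch of reading the data source settings from yardstick.conf instead of hardcoding them; the section and option names here are assumptions for illustration:

    import configparser

    parser = configparser.ConfigParser()
    parser.read("/etc/yardstick/yardstick.conf")
    # Section/option names are illustrative; fall back to a default if absent.
    influxdb_url = parser.get("dispatcher_influxdb", "target",
                              fallback="http://localhost:8086")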
Change-Id: I8a9c8afbce6c4534fc43a0bfb5c56d67a8b59db0
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|
|
|
|
The management route IP is not common in most SUTs, so it should
be removed.
Also, the huawei pod1 IPMI info should be updated so this test case
can be added into CI later.
Change-Id: I3a29c59c473ee7087d4d61753ffc955b061571fb
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
(cherry picked from commit 8701e63e3daf508d8e3482f0a344554d17ff6e24)
|
|
JIRA: YARDSTICK-817
Since checkno.png and checkyes.png are not Apache-2 licensed,
we need to remove them.
Change-Id: I40dd303fb54a3736ca969ac1c186d2cd23408436
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
(cherry picked from commit da0163b7b7aaf3ede4e757a0b7d94a5ea99b1083)
|
|
Extract node IPs and IDs for each node having a controller or compute
role (name starting with the "cmp" or "ctl" prefix) and add them into
the $pod_yaml file, analogous to the previous implementation.
Since node IDs are expected to be unique integers (a condition the
salt node ID format does not meet), they are substituted with an
incrementing index for each controller/compute node in the environment.
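A hedged Python sketch of the substitution described above; the real change lives in the pod.yaml generation scripts and uses its own data structures:

    def build_pod_nodes(salt_nodes):
        """Keep ctl*/cmp* nodes and replace salt node IDs with an incrementing index."""
        selected = (n for n in salt_nodes if n["id"].startswith(("ctl", "cmp")))
        return [{"node_id": index, "ip": node["ip"], "salt_id": node["id"]}
                for index, node in enumerate(selected, start=1)]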
Change-Id: Id90626edc3f098bd96343336b2be179721dee5a1
Signed-off-by: Catalina Focsa <catalina.focsa@enea.com>
(cherry picked from commit 6892687967d2d5ac8db37dd67b3e52d9f775eda6)
|
|
JIRA: YARDSTICK-814
The test suites
"opnfv_os-odl-fdio-ha_daily.yaml",
"opnfv_os-odl-dvr-noha_daily.yaml",
"opnfv_os-odl-sfc-noha_daily.yaml"
are missing from the yardstick-apex-baremetal-daily-euphrates job.
We need to create them.
Change-Id: I6d8bbeb17cd887776f1f3b401ec80523ea90d3c1
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
(cherry picked from commit 6ecb9a6d50345277645633b1bed4d255dc434222)
|
|
JIRA: YARDSTICK-785
Change-Id: Ib37498e8df6a520f1d03256b73346fcedab3a177
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
(cherry picked from commit 9ea225f671b774c6e373dbaab146d68cac16194e)
|
|
Change-Id: Ia9722604b7c8ae23e784e780f113d012de544d4b
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-803
Currently the kubernetes test case can only run on the master node.
We need to support running it from the jump server,
so add a Service of type NodePort.
Then we can log in to the pod using the nodePort.
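An illustrative NodePort service body; the names, labels and ports here are assumptions, not the actual template used by the Kubernetes context:

    ssh_service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "host-k8s-service"},       # illustrative name
        "spec": {
            "type": "NodePort",
            "ports": [{"port": 22, "protocol": "TCP"}],
            "selector": {"app": "host-k8s"},             # illustrative label
        },
    }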
Change-Id: Ia7900d263f1c5323f132435addec27ad10547ef9
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|
|
If it uses shutdown, it will take several minutes to shut down,
which causes the IPMI power-on command to fail.
Change-Id: I74b61325cbcc3a6ec070d2fa103accf84f29b0fa
Signed-off-by: root <limingjiang@huawei.com>
|
|
We have seen cases where the grafana container bring-up code would fail
because the HTTP API was accessed too quickly. Added a 10-second timeout
for the first query of the API.
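A minimal sketch of waiting for the grafana HTTP API before the first query, assuming the requests library and an illustrative URL:

    import time
    import requests

    def wait_for_grafana(url="http://127.0.0.1:3000/api/org", timeout=10):
        """Poll the API until it answers or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if requests.get(url).status_code == 200:
                    return True
            except requests.ConnectionError:
                pass
            time.sleep(1)
        return False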
Change-Id: Ifc95a626d0ab5552a1f26fb167fc3f65791392d7
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
|
|
We were using the raw sort index of the interfaces to
set the MAC address, but we should be using the
traffic id from the static JSON instead.
Change-Id: I13284db04abb3eaf8c9826974a9e5aa1c37b3891
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Since we increased the image size, 4G is not sufficient anymore.
Change-Id: Iae25cf4cfb7a6cc69c8d28771c183a2342ac38d0
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
|
|
ixia: pass ports to generate_samples
Change-Id: I90d12fa2ce8cd4d1c2a18bdcf70027f6d9e3f77f
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I2f700fbb169d02d126fe7ea22721bebf127c1206
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I26957977e6dcd0392078a543a6907a550711c702
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: Ib429ba24d2b7287b6ec4e749386da0e1242d6a20
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
and re-create the container.
Change-Id: I21204ddf97e2cccc2d5a762f5d910068bda1a948
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
|
|
The problem is that we share the same ProxResourceHelper
for both VNF and TG.
For VNF we want to talk to resource.py and get collectd KPIs.
For TG we need to read the TG-calculated KPIs from the queue, and
we also want collectd KPIs.
The workaround is to use a different method name, collect_collectd_kpi,
for VNFs.
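A rough, self-contained sketch of the split; the real ProxResourceHelper does far more and these method bodies are stand-ins:

    import multiprocessing

    class ProxResourceHelperSketch(object):
        def __init__(self):
            self._queue = multiprocessing.Queue()

        def _collect_resource_kpi(self):
            # Stand-in for the collectd KPIs gathered via resource.py.
            return {"core": {}}

        def collect_collectd_kpi(self):
            # VNF path: only the collectd KPIs.
            return self._collect_resource_kpi()

        def collect_kpi(self):
            # TG path: TG-calculated KPIs from the queue plus collectd KPIs.
            result = {} if self._queue.empty() else self._queue.get()
            result.update(self._collect_resource_kpi())
            return result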
Change-Id: Icc2132758e37ce210f5600a0cd433077930208e5
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-802
Addition of PROX L2FWD_Multiflow, ACL, Load Balancing plus
grafana dashboards
Supports 2 and 4 port Baremetal & Heat
Change-Id: I1f3990d5451de265ee3901302569c355ece3b146
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
|
|
The prox files were being found correctly.
If we use find_relative_file, they will be looked up
relative to the task_path.
Change-Id: Ifde5d07df5ccfbfeba015b2f43bd8b53e89a00b7
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I723477edf810a220816a2e67aa80f7f144efb3a6
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
We generate the prox_config_dict in the _run process,
but we also need it in the _traffic_runner process to
get core info.
Use a queue to pass the config list between the processes,
and enable collect_kpi.
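A minimal sketch of handing a config between two processes with a queue; the names here are illustrative, not the actual _run/_traffic_runner code:

    import multiprocessing

    def traffic_runner(config_queue):
        # Block until the producer side has published the config.
        prox_config = config_queue.get()
        print("cores in use:", prox_config.get("cores"))

    if __name__ == "__main__":
        queue = multiprocessing.Queue()
        runner = multiprocessing.Process(target=traffic_runner, args=(queue,))
        runner.start()
        queue.put({"cores": [1, 2, 3]})   # produced on the _run side
        runner.join()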
Change-Id: Ibaf41d606e559a87addf43d6ddaed206dbd2d20c
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
- we need to make sure we have lsof installed
- we need to update the package cache inside the image, because otherwise some packages fail to install
Change-Id: Ic555489779e9096540001cb9c62ea2ab25c1ae90
Signed-off-by: Maciej Skrocki <maciej.skrocki@intel.com>
|
|
Change-Id: I85afff4582bf538fcd0be5b4db1405a4da2573f9
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I92146411707a9ec29864d164dbd63b96d05bffe0
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Instead of using a key_filename for Heat, we can
read the key as a string directly using pkg_resources.resource_string().
This will enable us to save Heat stacks as pod.yaml, because
we can embed the key into the pod.yaml directly.
Change-Id: I16baaba17dab845ee0846f97678733bae33cb463
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
We want to generate pod.yaml from Heat contexts so we can
re-use the context without destroying it.
But we don't have node role information and it doesn't
make sense in this case, so make the role optional.
Since we changed Heat to use pkey instead of key_filename,
we can embed the pkey into the pod.yaml, but we have
to make sure to convert the pkey to a string, in case
it is an RSAKey object.
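A small sketch of normalizing the pkey before writing pod.yaml, assuming paramiko's RSAKey; the helper name is illustrative:

    import io
    import paramiko

    def key_to_string(pkey):
        """Return the private key as a PEM string, whether given a str or an RSAKey."""
        if isinstance(pkey, paramiko.RSAKey):
            buf = io.StringIO()
            pkey.write_private_key(buf)
            return buf.getvalue()
        return pkey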
Change-Id: Ibefcfbd8236e68013a704c39964cb870da825da8
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
JIRA: YARDSTICK-810
Currently host and target are given at the scenario level, but as an
input we prefer them under scenario['options'].
So add support for scenario['options']['server_name'].
If we write the host in scenario['options']['server_name'], the host IP info
will be written into the context.
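A hedged sketch of the lookup order this implies; the helper and key names are illustrative and may not match the actual structure:

    def get_server_name(scenario, key="host"):
        """Prefer the name under options['server_name'], fall back to the scenario-level key."""
        options = scenario.get("options", {})
        return options.get("server_name", {}).get(key) or scenario.get(key)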
Change-Id: I90df20467ef5da772d22e9f272a2cac250f822e0
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|
|
JIRA: YARDSTICK-785
Currently, if a test case fails, we log an error,
but if a case succeeds we do not give any indication.
We need to log success when a case succeeds.
Change-Id: I0f41ac55f2569f44b787133e3f2594a5c5547f4a
Signed-off-by: chenjiankun <chenjiankun1@huawei.com>
|
|
When compiling Trex, DPDK, collectd and all the SampleVNFs
we use more than the 2.2GB size of the original Ubuntu cloud image.
Accordingly we need to resize the image.
If we were not inside a docker container we would use virt-resize
to automatically handle all the cases, but virt-resize launches qemu.
Instead we can use qemu-img to add extra space, then
luckily we can use parted to resize the partition and finally
resize2fs to resize the filesystem.
This limits us to only ext3/4 images, but if we need to
we could add support for other filesystems by checking
file system type.
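A hedged sketch of the resize sequence driven from Python; the image path, new size and device name are illustrative, and resize2fs needs the partition exposed (e.g. via a loop device) first:

    import subprocess

    IMAGE = "yardstick-image.img"   # illustrative path
    # Grow the raw image, then grow partition 1 to fill the new space.
    subprocess.check_call(["qemu-img", "resize", IMAGE, "6G"])
    subprocess.check_call(["parted", "-s", "-a", "optimal", IMAGE,
                           "resizepart", "1", "100%"])
    # Finally grow the ext4 filesystem on the mapped partition (device is illustrative).
    subprocess.check_call(["resize2fs", "/dev/mapper/loop0p1"])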
Change-Id: Iac84b8e6967af5be64c280a7b1eaaf09f5d6b3aa
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
For now just copied and renamed opnfv_os-nosdn-nofeature-ha_daily.yaml
Change-Id: Idbd37a3e21220aa407d053157da71b449bad15ee
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Also rename private to uplink and public to downlink.
For the scale-out template we need to count from 0
so we can use range() without +1/-1 errors:
vnf_0, vnf_1
tg_0, tg_1
Also fix Ixia defaults.
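A tiny sketch of the zero-based naming, so range(count) lines up with the generated names:

    count = 2
    vnf_names = ["vnf_{}".format(i) for i in range(count)]   # ['vnf_0', 'vnf_1']
    tg_names = ["tg_{}".format(i) for i in range(count)]     # ['tg_0', 'tg_1']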
Change-Id: I6aecfbb95f99af20f012a9df19c19be77d1b5b77
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|
|
We don't want to make external DNS requests during unit tests.
Change-Id: I5ed67b700ef1dab4b650ae5071a3cf641a17ae4c
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
We get vld_id from the topology; we
don't need it in the Heat context.
Change-Id: I42c2309dda919e5b2026065dda851555df76ba57
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: Icf7a01a053495e6d96bd664d6ceda8964fa437eb
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I0b25e704b29fc68678eaa29d9e1d1eb04ee94e3e
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I80501ab3662a58930939d849f0bde0e810154a39
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
|
|
Change-Id: I6cf8675c83fc081dd22ae7896e63ff7725ed3c13
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Change-Id: I664437d598db9f9dcc7036e306b8a4edc40287cf
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Also sends a new line to the VNF when waiting for prompt.
Change-Id: Ib8641093974cd6713594aac9b418595ad5268e87
Signed-off-by: Martin Banszel <martinx.banszel@intel.com>
|
|
We assume the time it takes to start multiple
instances is proportional to the number of instances,
so we scale the timeout based on the number of instances.
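A one-line sketch of that scaling; the base timeout value and names are illustrative:

    SERVER_WAIT_SECONDS = 120   # illustrative per-instance base timeout

    def scaled_timeout(instance_count):
        return SERVER_WAIT_SECONDS * max(instance_count, 1)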
Change-Id: I6901890d3f184ac4e38e1d6823b96c291579e04a
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
|
|
Add a new PortPair class to resolve the
topology into lists of public and private ports.
Before we were calculating public/private in multiple
locations and using different conventions.
In addition, for all the DPDK tests we need to use the DPDK
port number and not rely on interface ordering or interface naming
conventions.
We used to use xe0 -> 0, xe1 -> 1, etc. This is not the DPDK port
number.
Use the new dpdknicbind_helper class to parse the output of
dpdk-devbind.py to find the actual DPDK port number at runtime.
We then use this DPDK port number to correctly calculate the
port_mask_hex.
The port mask maps the DPDK port num (PMD ID) to the LINK ID
used in the pipeline config
We also need to make sure we only use the interfaces matched to the
topology and not use all the interfaces, because in some cases we will
have unused interfaces. In particular TRex always requires an even
number of interfaces, so for single port TRex tests we have to create
the second port and not use it.
Thus we had to modify the traffic generator stats code to only dump
stats for used ports and not for unused ports.
Ixia was using interface ordering to map to Ixia ports; instead we use
the dpdk_port_num, which must be hardcoded for Ixia.
Renamed traffic_profile.execute to traffic_profile.execute_traffic so
we can trace the code easier.
We pass the port used by the traffic profile to generate_samples so we
don't get stats for unused ports.
Fixed up vPE config creation and bring up issues.
Fixed up CGNAPT and UDP_Replay to work correctly.
Tested with 4-port scale-out
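A minimal sketch of building the hex port mask from DPDK port numbers (PMD IDs), as described above; the function name is illustrative:

    def make_port_mask_hex(dpdk_port_numbers):
        """OR together one bit per DPDK port number (PMD ID), e.g. [0, 2] -> '0x5'."""
        mask = 0
        for port_num in dpdk_port_numbers:
            mask |= 1 << port_num
        return hex(mask)

    assert make_port_mask_hex([0, 2]) == "0x5"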
Change-Id: I2e4f328bff2904108081e92a4bf712333fa73869
Signed-off-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Edward MacGillivray <edward.s.macgillivray@intel.com>
|