Age | Commit message | Author | Files | Lines
|
Change-Id: I7f6072121c4d88192c828e2e3cf2beb5d6c22c37
Signed-off-by: jenkins-ci <jenkins-opnfv-ci@opnfv.org>
Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
|
|
Change-Id: I37c7b8b6e4bd19ef94b9b42fe2e5e89cc3e2da21
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
The download was not being skipped properly on upstream scenarios because
the scenario was not being detected and was instead set to "gate".
Change-Id: I38533ad8140be48726aa2cb1c106d7ef6ca9afd5
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Wrapping the script in a function will make it importable from other
Python code. Calling the file directly will still work as it did before.
Change-Id: I8336d34b05687fa650ce1c123bb37fa311ce2978
Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
|
|
Scenario names containing 'upstream' will not download any artifacts
and will instead deploy from upstream. For now, this even applies to the
Apex Python RPM for daily deployments. We will only use the git repo for
daily deployments until after Fraser.
Change-Id: I0da16dfde117ba6c1e7597294d8e4afc8501dd53
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
1. Run the ovp.1.0.0 test suite on Apex with an even 'BUILD_NUMBER'
for the nosdn-nofeature and bgpvpn scenarios.
2. Run the proposed_tests test suite on Apex with an odd 'BUILD_NUMBER'
for the nosdn-nofeature and bgpvpn scenarios.
JIRA: DOVETAIL-611
Change-Id: I5d4a86242d633eb83ddb0939dff5cf617c971c3b
Signed-off-by: xudan <xudan16@huawei.com>
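A minimal JJB builder sketch of how such an even/odd split on BUILD_NUMBER could be expressed; the macro name and suite variable are illustrative, not the actual Dovetail job definitions:

- builder:
    # Hypothetical macro name; picks a Dovetail suite based on BUILD_NUMBER parity.
    name: 'dovetail-suite-by-build-number'
    builders:
      - shell: |
          #!/bin/bash
          # Even build numbers run ovp.1.0.0, odd ones run proposed_tests.
          if [ $((BUILD_NUMBER % 2)) -eq 0 ]; then
              SUITE_NAME='ovp.1.0.0'
          else
              SUITE_NAME='proposed_tests'
          fi
          echo "Selected Dovetail suite: ${SUITE_NAME}"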
|
|
fingers crossed
Change-Id: I220a36ec8a6a0d95e847a5672c4d8e5c0d34c5ac
Signed-off-by: agardner <agardner@linuxfoundation.org>
|
|
Remove and then re-add to see if we can get jjb merge to work
Change-Id: Iff380b38bbc5a69e2850cd91a99267b6d5b1128f
Signed-off-by: agardner <agardner@linuxfoundation.org>
|
|
Change-Id: I88f33b7dcdf8f4d0a3aa3f8d46a07f10c62e6ae9
Signed-off-by: agardner <agardner@linuxfoundation.org>
|
|
We were missing 'concurrent' on this job, which was preventing parallel
execution.
Change-Id: I4d1ea62aef2a321220799cebee008f494490886c
Signed-off-by: Tim Rozet <trozet@redhat.com>
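For reference, enabling concurrent builds in JJB is a one-line setting on the job template; a minimal sketch with an illustrative job name:

- job-template:
    name: 'apex-deploy-virtual-{stream}'
    # Allow multiple builds of this job to run in parallel.
    concurrent: true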
|
|
Remove passing node params and enable global queue scanning. Hopefully
this will allow 2 apex-virtual jobs to run at the same time (one per
slave).
Change-Id: I310dbc477e267c302d50599bab2a933ce988dba7
Signed-off-by: Tim Rozet <trozet@redhat.com>
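Global queue scanning maps to the build-blocker plugin's queue-scanning option in JJB; a sketch, assuming the blocking pattern shown is representative rather than the exact one used:

- job-template:
    name: 'apex-virtual-{stream}'
    properties:
      - build-blocker:
          use-build-blocker: true
          blocking-jobs:
            - 'apex-verify.*'
          # Scan the entire build queue, not just the local node's queue.
          queue-scanning: ALL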
|
|
Scenario list for Fraser:
https://wiki.opnfv.org/display/SWREL/Fraser+Scenario+Status
Change-Id: I083dc5b0b9cea9f91d7a9568a05df5865aeafa05
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Changes include:
- Remove defining 'node:' per job template and use the slave-params defaults
- The gate job was using the daily/build slave when it should be using the virtual slave
Change-Id: Iec2321801daef1b1fa40724a244bf2f6edf36c6e
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
The apex-verify job calls apex-virtual to deploy and run functest. Apex has
2 virtual slaves attached to the apex-virtual-master label. When 2
verify jobs are triggered at the same time, the apex-verify job is scheduled
on each slave correctly. However, when the multijob triggers
apex-virtual jobs, it schedules both of them on a single slave.
This happens even though the apex-virtual job has the same slave label and
node parameters are not passed from the verify job. This patch changes the
node to be passed to apex-virtual from apex-verify. That way both nodes
will be scheduled on, but this is still not ideal scheduling, as more
than 1 verify can run per node (but not more than one virtual job).
Change-Id: I155351c9037f70df2c5dba11bb5592423849e760
Signed-off-by: Tim Rozet <trozet@redhat.com>
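In JJB, passing the parent's node down to a multijob phase is done with node-parameters on the phase project; a minimal sketch with illustrative job names:

- job-template:
    name: 'apex-verify-{stream}'
    project-type: multijob
    builders:
      - multijob:
          name: apex-virtual-deploy-test
          condition: SUCCESSFUL
          projects:
            - name: 'apex-virtual-{stream}'
              current-parameters: true
              # Schedule the triggered job on the same node as this verify job.
              node-parameters: true
              kill-phase-on: FAILURE
              abort-all-job: true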
|
|
GERRIT_REFSPEC is always passed by the gerrit-trigger plugin when a job
is triggered by Gerrit. Because it is not explicitly defined, there is no
way to manually trigger jobs, as the git clone looks up the list of refs
by GERRIT_REFSPEC.
Being able to manually trigger jobs (with node parameters so they can be
restricted) is very helpful in debugging CI issues.
Change-Id: I8a1d9ea380902fc95f30482e5acb616347709ab1
Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
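Defining GERRIT_REFSPEC explicitly so the job can also be started by hand would look roughly like this in JJB; the default shown is an assumption:

- job-template:
    name: 'apex-verify-{stream}'
    parameters:
      - string:
          name: GERRIT_REFSPEC
          # Assumed default so a manual build still has a valid refspec to clone.
          default: 'refs/heads/{branch}'
          description: "Refspec to checkout; overridden by the Gerrit trigger."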
|
|
Change-Id: I92b3c2fce51dad5e0e00b836a41af40f845e701e
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Change-Id: I7b10ac19a8844832886e6a54d065ee79dde026d0
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
We were not triggering on the ci/ path in our Apex repo. This path
contains some code and a file we use to trigger verify jobs for
dependent patches.
Change-Id: I54f2826f8a16a1d0219d6ecc6ef8d257840b6399
Signed-off-by: Tim Rozet <trozet@redhat.com>
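The Gerrit trigger's file-paths list is where such a path is added in JJB; a sketch assuming the ANT compare type typically used (the macro name is illustrative):

- trigger:
    name: 'apex-verify-trigger-ci-path'
    triggers:
      - gerrit:
          trigger-on:
            - patchset-created-event
          projects:
            - project-compare-type: 'ANT'
              project-pattern: 'apex'
              branches:
                - branch-compare-type: 'ANT'
                  branch-pattern: '**/{branch}'
              file-paths:
                # Also fire verify jobs on changes under ci/.
                - compare-type: ANT
                  pattern: 'ci/**'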
|
|
Since only Apex has the bgpvpn scenario in Euphrates, enable running Dovetail
on the Apex Euphrates scenario os-odl-bgpvpn-ha to test the sdnvpn test cases.
JIRA: DOVETAIL-568
Change-Id: Ic7c880a5ef911fac17807e19484f937bdaa53e21
Signed-off-by: xudan <xudan16@huawei.com>
|
|
Change-Id: I2f6e54badddf234fb781adc49b8395ac0144da06
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
These scenarios will not be part of 5.1 release.
Change-Id: Ied91df7379705414850cda504842ecef2b3c7e0b
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
This value is usually passed by parent jobs (daily, verify, etc.), but
exposing it as a parameter should allow us to build from the Jenkins GUI.
Change-Id: I294fbcd200ff5d8bbfca77681296c6e59d7f0063
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Fixes an issue where the apex-virtual-master label node was being passed from
the verify job to deploy/functest. We allow multiple verify jobs to kick off on
the virtual slaves, but passing the host down to the deploy/functest
jobs causes deploy/functest to only run 1 job at a time on the same
node, rather than running 2 jobs at a time, one on each slave.
Change-Id: I1648eb6b84f17a2b08db4d161effe977c7952d63
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
We use the apex-virtual-master label for slaves that will deploy and run
functest. Build and unit test jobs will not use this label; they will use
apex-build-master.
Change-Id: Ibf266bd37813ea8ef38fc8060f73f83462275cfd
Signed-off-by: Tim Rozet <trozet@redhat.com>
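In OPNFV releng these labels are usually exposed through a slave-params label parameter; a minimal sketch, with the macro name being an assumption:

- parameter:
    name: 'apex-virtual-master-defaults'
    parameters:
      - label:
          name: SLAVE_LABEL
          default: 'apex-virtual-master'
          description: "Slaves that deploy and run functest."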
|
|
JIRA: RELENG-287
Change-Id: Ie50fdddc47b47764d3e3064904f19015d5d39341
Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
|
|
Fixes some final yamllint issues introduced by recent patchsets in apex
and armband.
JIRA: RELENG-254
Change-Id: I26b45d737f06c215413e29c92031d14e23967506
Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
|
|
Dovetail danube jobs don't need to be triggered every day now.
Disable the timed trigger.
If needed, they can be triggered manually.
Change-Id: I2f114cd17fcd27d0e34be0824be3fc0d072dbae9
Signed-off-by: xudan <xudan16@huawei.com>
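One common way to disable a timed trigger in JJB is to leave the cron expression empty; a sketch, assuming the trigger is defined directly on the job template and using an illustrative job name:

- job-template:
    name: 'dovetail-apex-danube-proposed_tests-daily'
    triggers:
      # Empty cron expression: the trigger definition stays but never fires.
      - timed: ''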
|
|
JIRA: RELENG-254
Change-Id: I354d7064c560d4b23e361d556b7fe269d7fb5d26
Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
|
|
Change-Id: I1e4237fd9716e92eec7633006c54240284f312c6
Signed-off-by: Ilia Abashin <abashinos@gmail.com>
|
|
Change-Id: I1d01d9d4a72946b4998437972ae12083675e7e79
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Even with a timed trigger that shouldn't execute more than once a year,
the apex-daily-master job is still triggering every day or so. Use an
explicit 'disabled' setting to disable the job.
Change-Id: I3b014c0d0899dba617fcb7cfee17ca758b291f9f
Signed-off-by: Tim Rozet <trozet@redhat.com>
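The explicit disable is a single job-template setting in JJB; a minimal sketch:

- job-template:
    name: 'apex-daily-master'
    # Explicitly disable the job in Jenkins regardless of its triggers.
    disabled: true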
|
|
There is a bug where, if there are multiple builds queued and a daily build
completes, the following iso verify job will try to use the workspace of
the completed daily build to get the iso file. However, if another
build job has already started, it may clean and overwrite the workspace
of the daily build job, and the iso verify job will fail because the
file is now gone. This makes the build job copy the iso to a tmp
directory for apex iso verify to consume.
This should be safe since only one daily can run at a time on the host,
and the daily build and iso verify jobs always have to execute on the same
node.
Change-Id: Ie8e32c4abefbc311e505688d6da2b26ae08ed98f
Signed-off-by: Tim Rozet <trozet@redhat.com>
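A sketch of what copying the ISO out of the workspace could look like as a JJB shell builder; the macro name and paths are assumptions, not the actual Apex build script:

- builder:
    name: 'apex-copy-iso-to-tmp'
    builders:
      - shell: |
          #!/bin/bash
          set -o errexit
          # Copy the freshly built ISO out of the workspace so a later
          # iso verify job can consume it even if the workspace is reused.
          mkdir -p /tmp/apex-iso
          cp -f "$WORKSPACE"/build/*.iso /tmp/apex-iso/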
|
|
Updated all the jobs that use functest-suite.
For single test cases, we need to know which Docker image
the test belongs to.
Change-Id: If7db231ccc891233423f2f2bc3aa5b95d048b30a
Signed-off-by: Jose Lausuch <jalausuch@suse.com>
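That typically surfaces as an extra job parameter passed down to the functest-suite jobs; a sketch with a hypothetical parameter name and default:

- job-template:
    name: 'functest-suite-{installer}'
    parameters:
      - string:
          # Hypothetical parameter: the Functest Docker image a single
          # test case should run in.
          name: DOCKER_IMAGE_NAME
          default: 'opnfv/functest-smoke'
          description: "Functest Docker image to run the test case in."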
|
|
The old time is "0 12 * * *"; it needs to be set to "0 1 * * *".
Change-Id: Id568dcb638202612ef8d7a9464d71952a95bffe8
Signed-off-by: xudan <xudan16@huawei.com>
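In JJB the change is just the cron expression on the timed trigger; a minimal sketch with an illustrative job name:

- job-template:
    name: 'dovetail-apex-daily'
    triggers:
      # Fire once a day at 01:00 instead of 12:00.
      - timed: '0 1 * * *'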
|
|
Previously we had apex-verify-master running multiple instances on the
virtual slaves. Apex-verify-master would kick off a build on our build
server, and then apex-verify-master would execute the deploy multijob and
then the functest multijob. However, we found a bug where the Jenkins build
blocker would see that a deploy finished on the virtual slave, and then
execute the functest multijob as well as the deploy multijob (for the next
verify job) at the same time.
This patch adds a parent job apex-virtual-{stream} which calls the deploy
and functest multijobs and will block correctly. It also re-enables having
more than 1 apex-verify job running at a time on the virtual slaves.
Change-Id: Id15b2415407fc3318f333e3dfc59076d04db4ffb
Signed-off-by: Tim Rozet <trozet@redhat.com>
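A rough sketch of that parent-job structure in JJB, with two sequential multijob phases; the phase and child job names are illustrative:

- job-template:
    name: 'apex-virtual-{stream}'
    project-type: multijob
    builders:
      - multijob:
          name: deploy-virtual
          condition: SUCCESSFUL
          projects:
            - name: 'apex-deploy-virtual-{stream}'
              current-parameters: true
              node-parameters: true
              kill-phase-on: FAILURE
              abort-all-job: true
      - multijob:
          name: functest-smoke
          condition: SUCCESSFUL
          projects:
            - name: 'functest-apex-virtual-suite-{stream}'
              current-parameters: true
              node-parameters: true
              kill-phase-on: NEVER
              abort-all-job: false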
|
|
Change-Id: Ia4523a185708a9d29243b522894b38fd1f047682
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
We don't have other pods, so there is no reason to make extra labels.
Change-Id: Ib701ae25d6cd08035930773219f691c7dc1b156e
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Change-Id: I2f6dbe545c1c1adaa0a7020440f17f6f0cf37973
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Change-Id: I2f4a8c57bf056fcc266a0757b291309671ecc151
Signed-off-by: Peng Liu <pliu@redhat.com>
|
|
The tmp directory no longer holds large files, and removing it while
other jobs are running can cause build failures.
Change-Id: I504d06e2e114dd1be4fe3790fcefaf97c724552c
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Disables the master daily job. Just uses the master labels for Euphrates jobs
as well.
Change-Id: I65b0eed528518c07d3ef4194a021004deabe2ed0
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Change-Id: I2b7ee35500da4523a6cb872f89225fad6dd8af7f
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Merged 'apex-daily-master' and 'apex-daily-danube' into
'apex-daily-{stream}'
Change-Id: I2b1e9e3dd0869b6a1f2b1b6415b364a2d9f151d2
Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
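Folding the per-branch jobs into a single templated one follows the usual JJB stream pattern; a minimal sketch:

- project:
    name: 'apex-daily'
    jobs:
      - 'apex-daily-{stream}'
    stream:
      - master:
          branch: '{stream}'
      - danube:
          branch: 'stable/{stream}'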
|
|
The last patch to add Apex danube jobs on huawei-pod4 forgot to add the
job-template to the jobs list.
Change-Id: I671faa2068bab517adc59ad8597e9c05330d528f
Signed-off-by: xudan <xudan16@huawei.com>
|
|
Run Dovetail proposed_test job against Apex Danube daily on
huawei_pod4.
Change-Id: I14f4f86caa2b1fb2802b5ea154edec47784209cc
Signed-off-by: Peng Liu <pliu@redhat.com>
|