|
Change-Id: I189dd771f9985424694ca0164c6e42f117f12bf9
Signed-off-by: Peter Barabas <peter.barabas@ericsson.com>
|
|
As described in this bug: https://bugs.launchpad.net/fuel/+bug/1625518
the JSON output of the task can come in different formats: a single dict or a list
of dicts. During tests of https://gerrit.opnfv.org/gerrit/21807 only the
latter was observed; try to support both types of output.
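For illustration, the normalization could look roughly like this (the function
name and usage below are illustrative, not taken from the patch):
import json

def as_task_list(raw):
    # Accept either a single task dict or a list of task dicts.
    data = json.loads(raw) if isinstance(raw, str) else raw
    return [data] if isinstance(data, dict) else list(data)

# Both forms end up as a list of task dicts:
print(as_task_list('{"id": 1, "status": "ready"}'))
print(as_task_list('[{"id": 1}, {"id": 2}]'))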
Change-Id: I7d3e12270c8246b03bdc6c73d3be77a039df469f
Signed-off-by: Michal Skalski <mskalski@mirantis.com>
|
|
Use fuel2 to start the deployment. Since it does not return progress,
use the deployment task to provide this information. The currently used
'deploy-changes' will behave the same way:
https://bugs.launchpad.net/fuel/+bug/1565026
Try to handle the situation when nodes temporarily go offline. With
deploy-changes the environment was still in the 'new' state in this situation,
which caused timeouts from Jenkins.
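Roughly, the progress reporting could be done by polling the deployment task;
the sketch below assumes a cliff-style 'fuel2 task list -f json' invocation and
field names, which may differ from the installed client:
import json
import subprocess
import time

def wait_for_deployment(env_id, timeout=4 * 3600, interval=60):
    # Poll the deployment task until it reaches a terminal state.
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(['fuel2', 'task', 'list', '-f', 'json'])
        tasks = [t for t in json.loads(out)
                 if t.get('cluster') == env_id and t.get('name') == 'deployment']
        if tasks:
            task = tasks[-1]
            print('deployment: %s%% (%s)' % (task.get('progress'), task.get('status')))
            if task.get('status') in ('ready', 'error'):
                return task['status']
        time.sleep(interval)
    raise RuntimeError('deployment timed out')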
JIRA: FUEL-196
Change-Id: I6548a5ec807551388e845044c282b7af32eb9100
Signed-off-by: Michal Skalski <mskalski@mirantis.com>
|
|
Due to a bug in the code we did not apply network transformations to created
environments, but Fuel still generated the network schema itself, based on the
chosen segmentation type and the network-to-NIC assignment.
Since we don't use a custom network schema, we can remove the transformation
definitions from the dea pod override files. However, NIC properties still need
to be configured in case of a DPDK deployment.
JIRA: FUEL-192
Change-Id: Ib7dab4d61910ac8c44b6d91e0c486c9693034823
Signed-off-by: Michal Skalski <mskalski@mirantis.com>
|
|
In order to avoid fetching non-cluster nodes,
filter them out by the id option.
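In essence the filtering keeps only nodes that belong to the target environment
(a sketch; the node listing format is an assumption, not taken from the patch):
def cluster_nodes(nodes, env_id):
    # Keep only nodes assigned to the given environment (cluster) id.
    return [n for n in nodes if n.get('cluster') == env_id]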
JIRA: FUEL-183
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
Change-Id: If0d0a1480d648167f1bcf726f0d6d345d2e00711
|
|
Fixes https://jira.opnfv.org/browse/FUEL-152
Change-Id: I444bf3aef54ffd53c53431e2795b11b10545f55f
Signed-off-by: Fedor Zhadaev <fzhadaev@mirantis.com>
|
|
The purpose of this patch is to collect all available Fuel snapshot and
stack/node deployment logs for later off-line troubleshooting.
The intention is that Jenkins, or other deployment robots, will be able to
collect all logs from the deployment and store them in some repository where
developers can fetch them and perform off-line post-deployment troubleshooting.
The following script arguments have been added:
CI argument changes:
Added an argument to ci/deploy.sh:
-L [Deploy log path and file name], e.g.
-L ~/jenkins/deploy/deploy-888.log.tar.gz
This will create a tar gzip archive at the path and file name pointed out.
If -L is not specified, the log archive will be placed under the CI directory
with the following naming convention: deploy-YYMMDD-HHMMSS.log.tar.gz
Fuel internal deploy changes:
Added an argument to ci/deploy.py:
-log [Deploy log path and file name], e.g.
-log ~/jenkins/deploy/deploy-888.log.tar.gz
This will create a tar gzip archive at the path and file name pointed out.
If -log is not specified, the log archive will be placed under the CI
directory with the following naming convention:
deploy-YYMMDD-HHMMSS.log.tar.gz
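A minimal sketch of how such a default name and archive could be produced
(hypothetical helper, not the actual ci/deploy.py code):
import os
import tarfile
import time

def make_log_archive(log_dir, archive_path=None):
    # Fall back to deploy-YYMMDD-HHMMSS.log.tar.gz in the current (CI) directory.
    if archive_path is None:
        archive_path = time.strftime('deploy-%y%m%d-%H%M%S.log.tar.gz')
    with tarfile.open(archive_path, 'w:gz') as tar:
        tar.add(log_dir, arcname=os.path.basename(log_dir))
    return archive_path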
READY TO MERGE!
VERIFIED!
Change-Id: Icb75d9d2e66bdd47f75dcca29071943444d5c823
Signed-off-by: Jonas Bjurel <jonas.bjurel@ericsson.com>
|
|
Modified network or interface configurations were not reflected in
the deployment config, resulting in faulty node configurations.
Change-Id: I4ca20702c0171e7995f2b4f46317557ec9d5beac
Signed-off-by: Peter Barabas <peter.barabas@ericsson.com>
|
|
During the automatic deployment, when the environment is ready to be
deployed, the deploy.py script will spawn a shell process that will
perform the command "fuel deploy-changes". The standard output of this
process is then piped to a "tee" process, which redirects the output
to the standard output of the shell process, and to a file named
cloud.log. The file is monitored by the deploy script to find out the
status of the deployment, and print it to the log file of the automatic
deployment script, including percentages for each node being
provisioned. However, the deploy script never consumes the standard
output of the shell process. If the shell process produces enough
output, its standard output buffer will fill up, thus making the tee
process block trying to write to its standard output, and the cloud.log
file will not be updated. At this point, the deploy process, which is
monitoring cloud.log, will not detect any progress in the deployment,
and eventually it will time out and assume the deployment failed,
although it might have finished fine after that.
The solution here is to remove the "tee" process from the shell command,
and instead redirect standard output to the cloud.log file.
Another solution would be to actually parse the standard output of the
shell command from the deploy script itself, but that would require a
bit more work, as reading a line at a time might block the script.
Finally, with this patch the cloud.log file won't be deleted unless the
shell process has already finished.
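A simplified sketch of the chosen approach (paths and the exact fuel command
line are illustrative):
import subprocess
import time

# Write stdout straight to cloud.log instead of piping through tee, so no
# unread pipe can fill up and block while the file is being monitored.
with open('cloud.log', 'w') as log:
    proc = subprocess.Popen(['fuel', 'deploy-changes', '--env', '1'],
                            stdout=log, stderr=subprocess.STDOUT)
    while proc.poll() is None:
        time.sleep(10)  # the deploy script tails cloud.log here for progress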
Change-Id: I03a77be42d220b1606e48fc4ca35e22d73a6e583
Signed-off-by: Josep Puigdemont <josep.puigdemont@enea.com>
|
|
The Fuel environment name may contain spaces; putting the name in quote
marks prevents the second and subsequent words from being interpreted
as other parameters by the fuel command.
The name could contain double quotes too, so this doesn't solve all
problems, but it arguably covers the most common case.
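For comparison, building the command as an argument list (instead of a single
shell string) avoids the word-splitting problem altogether; the fuel invocation
below is illustrative only:
import subprocess

env_name = 'opnfv demo env'  # may contain spaces or even quotes

# Passed as a list element, the name reaches the fuel CLI as one parameter,
# whatever characters it contains.
subprocess.check_call(['fuel', 'env', 'create', '--name', env_name, '--rel', '2'])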
Signed-off-by: Josep Puigdemont <josep.puigdemont@enea.com>
|
|
For development reasons it is useful to
have an option so that everything is done
except the deployment of the OpenStack
environment.
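A minimal sketch of what such a switch could look like (the flag name is
hypothetical):
import argparse

parser = argparse.ArgumentParser(description='deploy helper (sketch)')
parser.add_argument('--no-deploy-env', action='store_true',
                    help='prepare everything but skip deploying the '
                         'OpenStack environment')
args = parser.parse_args()

# ... build, create VMs, configure the environment ...
if not args.no_deploy_env:
    pass  # trigger the actual OpenStack deployment here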
Change-Id: I1f1b7f9c89ee8c9ceea96353e25a51eee53b955c
|
|
Some components of OpenStack produce a lot of CPU load.
With this commit it is possible to
make more use of the hypervisor that the virtual
nodes run on.
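As an illustration, the vCPU count of the virtual nodes could be raised in
their libvirt domain templates (a sketch assuming libvirt XML definitions; the
file path is hypothetical):
import xml.etree.ElementTree as ET

def set_vcpu_count(domain_xml_path, vcpus):
    # Bump the <vcpu> element of a libvirt domain definition.
    tree = ET.parse(domain_xml_path)
    tree.getroot().find('vcpu').text = str(vcpus)
    tree.write(domain_xml_path)

set_vcpu_count('templates/virtual_environment/vms/controller.xml', 4)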
Change-Id: Ide567dd0823c5526171c29073f2a36aa5f27d4b6
|
|
Change-Id: I6f3f35680c9f90f99148865edf8ba905ecbb6c30
Signed-off-by: Peter Barabas <peter.barabas@ericsson.com>
|
|
- Increase the deployment timeout to 4h since some deployments
take more than 3h (KVM).
- Fixed build interference between OVSNFV and OVS-NSH where the
latter removed the OVSNFV build result from release/opnfv.
A proper fix for SR2 is to have f_isoroot/Makefile remove the release
directory before the build, and not have the plugins remove anything in release.
Change-Id: Ibca986554087d6a7f12ed8c7cc6fdd4919368ad2
Signed-off-by: Jonas Bjurel <jonas.bjurel@ericsson.com>
|
|
In Fuel 8.0 it is possible to install multiple versions of the same plugin.
Because of that there is an additional structure in the plugin configuration.
The assumption is that we only use one version of the plugin.
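Roughly, handling the extra nesting under the one-version assumption could look
like this (the 'versions' structure shown is an approximation, not an exact
copy of the Fuel 8.0 format):
def plugin_attributes(plugin_entry):
    # Return the attributes of the single installed plugin version.
    versions = plugin_entry.get('metadata', {}).get('versions', [])
    if len(versions) != 1:
        raise ValueError('expected exactly one plugin version, got %d' % len(versions))
    return versions[0]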
Change-Id: I50d5bc32dd6dab6fe2541748dd8404d887e336e0
Signed-off-by: Michal Skalski <mskalski@mirantis.com>
|
|
Change-Id: Icd2feed7326772837c74f35688160d1eb0c25652
Signed-off-by: Peter Barabas <peter.barabas@ericsson.com>
|
|
NOT VERIFIED
DO NOT MERGE
Change-Id: Id5b6029d11bfcd394e6f84a7b73b8a17820561cf
Signed-off-by: Jonas Bjurel <jonas.bjurel@ericsson.com>
|
|
and deployment/test scenarios
READY TO MERGE!
Replaces: https://gerrit.opnfv.org/gerrit/#/c/3995/
Abstract
--------
This deployment framework relies on a configuration structure
providing base installer configuration, per-POD specific configuration,
plugin configuration, and deployment scenario configuration.
- The base installer configuration resembles the least common denominator
of all HW/POD environments and deployment scenarios (these configurations
are normally carried by the installer project, in this case Fuel@OPNFV).
- The per-POD specific configuration specifies POD-unique parameters; which
POD parameters may be altered is governed by the Fuel@OPNFV project.
- The plugin configuration provides the configuration of a specific plugin.
These configurations maintain their own namespace and are normally maintained
by the collaborative projects building Fuel@OPNFV plugins.
- The deployment scenario configuration provides a high-level, POD/HW-environment-
independent scenario configuration for a specific deployment. It defines which
features/plugins shall be deployed, as well as the needed overrides of the
plugin configuration, the base installer configuration, and the POD/HW
environment configuration. Which objects are allowed to be overridden
is governed by the Fuel@OPNFV project.
Executing a deployment
----------------------
deploy.sh must be executed locally on the target lab/POD jump server.
A lab configuration structure must be provided - see the section below.
It is straightforward to execute a deployment task - as an example:
sudo deploy.sh -b file:///home/jenkins/config -l ericsson-1 -p pod-2
-s os_odl-l2_no-ha -i file:///home/jenkins/MyIso.iso
The -b and -i arguments should be expressed in URI style; the resources can thus be
local or remote.
Feedback
--------
Please give feedback before I go too far on a wrong tangent.
Implemented scenarios so far:
-----------------------------
- os_ha
- os_no-ha
- os_odl-l3_ha
- os_odl-l3_no-ha
- os_odl-l2_ha
- os_odl-l2_no-ha
- os_onos_ha
- os_onos_no-ha
- os_kvm_ha
- os_kvm_no-ha
- os_ovs_ha
- os_ovs_no-ha
- os_kvm_ovs_ha
- os_kvm_ovs_no-ha
VERIFIED
READY TO MERGE
JIRA: FUEL-35
Change-Id: I94a9b477d8ed4ee8057c16d8f20fe543f7ecc20d
Signed-off-by: Jonas Bjurel <jonas.bjurel@ericsson.com>
|
|
nodes
Change-Id: Id43e74fd3ebd1bd0c62e2aa963793d6b072e3fcc
Signed-off-by: Szilard Cserey <szilard.cserey@ericsson.com>
|
|
Restructure of the directory layout due to the move of Fuel into its own repo.
JIRA: FUEL-85
Change-Id: I3647e1992a508f29dce06a5d6c790725c527f6f5
Signed-off-by: Jonas Bjurel <jonas.bjurel@ericsson.com>
|