JIRA: DAISY-42
Change-Id: I0fd709bb0dbee42cdc73076773cb635be6ba02cd
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
Rename all scenarios with "odl_l3" in their name to just "odl" in Euphrates.
Daisy will keep "odl_l3" and "odl_l2" in code (not exposed to users)
for future reference.
Change-Id: Ib762dd808d4f9467b0e6827b8bbed6d9df7e0e0e
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
JIRA: DAISY-42
JIRA: DAISY-56
In bare metal deployment, we can use the PDF (Pod Descriptor File) to get
the MAC addresses of the nodes (https://gerrit.opnfv.org/gerrit/#/c/38387/).
The MACs can then be used to distinguish the discovered nodes and assign
roles to them, as is done for virtual deployment in
https://gerrit.opnfv.org/gerrit/#/c/38381/.
Change-Id: Ib0f1a60b8935f528a828f716ccc916b767cfa6f9
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
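A minimal sketch of the lookup described above, assuming the PDF has already been reduced to a flat MAC-to-role list; the file name, layout, and role names here are illustrative, not Daisy's actual PDF schema:

    #!/bin/bash
    # Hypothetical MAC-to-role list derived from the PDF (illustrative only).
    cat > /tmp/pdf_macs.txt <<'EOF'
    e8:39:35:0f:58:70 controller
    e8:39:35:0f:58:71 compute
    EOF

    # Given the MAC of a discovered node, look up the role it should get.
    discovered_mac="e8:39:35:0f:58:70"
    role=$(awk -v mac="$discovered_mac" '$1 == mac {print $2}' /tmp/pdf_macs.txt)
    echo "node with MAC ${discovered_mac} gets role: ${role:-unknown}"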
Virtual deployment prints error messages such as "Domain not found"
and "network is already active".
The script forgets to destroy the still-active network, and it should
not try to destroy a non-existent VM or network.
Change-Id: I8d9dce9d70f732bd6942b293e407e1845d81fc39
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
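The cleanup behaviour described above can be sketched roughly as follows; the VM and network names are placeholders and this is not the actual deploy script:

    #!/bin/bash
    # Only tear down a libvirt domain or network if it actually exists,
    # so "Domain not found" style errors are avoided.
    vm_name="controller01"     # placeholder
    net_name="daisy1"          # placeholder

    if virsh domstate "$vm_name" >/dev/null 2>&1; then
        virsh destroy "$vm_name" >/dev/null 2>&1 || true
        virsh undefine "$vm_name"
    fi

    if virsh net-info "$net_name" >/dev/null 2>&1; then
        virsh net-destroy "$net_name" >/dev/null 2>&1 || true
        virsh net-undefine "$net_name"
    fi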
Change-Id: I9764612171ef3bf2cdfc652420a2b162fcbfab43
Signed-off-by: zhouya <zhou.ya@zte.com.cn>
JIRA: DAISY-56
Replace the hardcoded "controller01" with $name.
Change-Id: Icd959ca55079a6ac0bfbd181ff134d7decfb89f5
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
JIRA: DAISY-56
Currently, roles are assigned to nodes randomly, because the
'add_hosts_interface' function in tempest.py simply uses zip to map
host names to hosts.
libvirt_utils.py: get the MAC addresses from the VMs
environment.py: save the MAC addresses
daisy_server.py: write a new deploy.yml that contains the MAC addresses
and copy the file to the Daisy server
get_conf.py: read the MAC addresses from the new deploy.yml
tempest.py: assign roles to nodes when the MAC addresses match
controller.xml: increase the RAM to distinguish controllers from compute nodes
deploy.sh: apply the same change in the bash script
Change-Id: Ia61b60d39d319c5d01e3505727fafc63a0585858
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
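The first step of the chain above (the real code is Python, in libvirt_utils.py) boils down to reading each VM's MAC addresses from libvirt; a rough shell equivalent, with illustrative VM names:

    #!/bin/bash
    # Print the MAC addresses of each deployment VM so that roles can later
    # be matched by MAC instead of by position in a zip().
    for vm in controller01 computer01; do      # illustrative VM names
        macs=$(virsh domiflist "$vm" | awk 'NR > 2 && NF {print $5}')
        echo "$vm: $macs"
    done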
Just move the code from get_para_from_deploy to the get_conf file;
no functionality change.
Change-Id: I86aa1325ff37cb2ae0784c9487e62e95cc23f644
Signed-off-by: zhouya <zhou.ya@zte.com.cn>
In [1], we got an error from docker, "No such container: daisy",
while issuing "docker rm -v -f daisy". This is OK because we run
"docker run --rm" before this, so it is safe to add "|| true" after
"docker rm -v -f daisy".
[1] https://build.opnfv.org/ci/job/daisy-build-daily-master/500/console
Change-Id: I3d17595156f1b6181a84d9a03e2cd6ddff275eb3
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
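The fix itself is a one-liner, shown here for completeness:

    # The container may already be gone because it was started with
    # "docker run --rm", so ignore a failed removal.
    docker rm -v -f daisy || true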
Switch to upstream daisy's stable/ocata branch
Change-Id: I5ff0b0a28a8d2f76f0cb813af8f8241175bb6054
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
Enable the HA scenario options, since the HA function is now supported upstream.
Change-Id: Ie1889afc1a149f171a9b324eb284fd650baed397
Signed-off-by: zhongjun <zhong.jun@zte.com.cn>
Change-Id: I0157bf8b6fa9be254c61bb384065f80107ab3dda
Signed-off-by: root <zhou.ya@zte.com.cn>
The project's deployment job currently uses os-nosdn-nofeature-ha,
so we need to treat os-nosdn-nofeature-ha as a valid scenario
to let the job continue to work.
Change-Id: Ib9311ada9b043b1f695f43edb51adbb3714d3356
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
Add a validity check for the scenario argument in deploy.sh.
Change-Id: Ifeeed3882b22ba379975c2356d761e0536c5c2c9
Signed-off-by: zhongjun <zhong.jun@zte.com.cn>
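A sketch of such a check, assuming a simple whitelist; the accepted scenario list below is illustrative, not the exact list in deploy.sh:

    #!/bin/bash
    DEPLOY_SCENARIO=${1:-os-nosdn-nofeature-noha}
    case "$DEPLOY_SCENARIO" in
        os-nosdn-nofeature-noha|os-nosdn-nofeature-ha|os-odl-nofeature-noha|os-odl-nofeature-ha)
            echo "Deploying scenario: $DEPLOY_SCENARIO"
            ;;
        *)
            echo "Error: invalid scenario '$DEPLOY_SCENARIO'" >&2
            exit 1
            ;;
    esac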
Change-Id: I759d864efa524c0d564b9d93aa480e155149adaa
Signed-off-by: zhouya <zhou.ya@zte.com.cn>
Change-Id: I0660c016e18491395c7253e5576f8fa1c8aa051e
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
Default is os-nosdn-nofeature-noha
Change-Id: I12e70552c426884269c2c7f1bfa05e1db5658bea
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
There are error messages "iptables: No chain/target/match by that name"
in baremetal deployment:
https://build.opnfv.org/ci/job/daisy-deploy-daily-master/298/console
Change-Id: I7e2940222fd0a99ee42823a08a285bdd93892fe6
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
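One way to avoid that message is to check for, or tolerate, a missing rule or chain before deleting it; the chain name below is hypothetical:

    #!/bin/bash
    # Delete the rule only if it is present; ignore a chain that never existed.
    if iptables -C FORWARD -j DAISY_FWD 2>/dev/null; then
        iptables -D FORWARD -j DAISY_FWD
    fi
    iptables -F DAISY_FWD 2>/dev/null || true
    iptables -X DAISY_FWD 2>/dev/null || true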
Change-Id: I654a27042ff9807a502773d48df1c26d12302de7
Signed-off-by: zhouya <zhou.ya@zte.com.cn>
Change-Id: I179b5aab51958c0cd4653e65bbd74df5cfd7c53e
Signed-off-by: SerenaFeng <feng.xiaowei@zte.com.cn>
Change-Id: I3026c4dd83cc19246c19ab568005c4c0b37ff9d9
Signed-off-by: SerenaFeng <feng.xiaowei@zte.com.cn>
Change-Id: Ib80710c784d384ebc27eb0f51fcb4384017eecca
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
The upstream openstack/daisycloud-core now supports OpenStack Ocata:
https://review.openstack.org/#/c/465410/
Change-Id: I14825c80cdd2297e5b0df3680f30fa5c32de3cc4
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
1) Mainly, increase PACKETS_PER_BUFFER to 65536 to reduce the
frequency of TCP client acks.
2) Also remove TCP_BUFF_SIZE and define each buffer size in a
more intuitive way.
3) Free more unused memory to avoid being killed by the oom-killer
after enlarging PACKETS_PER_BUFFER.
4) Increase the client's select() timeout to 20 seconds, since we hit
timeouts when the CPU was busy on the same bare metal host running 20 VMs.
Tested this patch set in a 10-VM environment: it can multicast a 2.7 GB
file to 10 VMs in 6 minutes, while unicast needs 30+ minutes.
Change-Id: Iaf862fb1f1259cc770f720ccdd95dcc281aef262
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
The physical network device of the external network is hardcoded to
'physnet1'; change it so that it is obtained from the network.yml
configuration file.
Change-Id: Id2e45ac488619db2247e73cc3fed5706db31d9e9
Signed-off-by: zhongjun <zhong.jun@zte.com.cn>
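A rough sketch of reading the device from configuration instead of hardcoding it; the ext_mapping/physnet keys are invented for illustration and may not match Daisy's real network.yml schema:

    #!/bin/bash
    cat > /tmp/network.yml <<'EOF'
    ext_mapping:
      physnet: physnet2
    EOF
    # Fall back to the old hardcoded value if the key is absent.
    physnet=$(awk '/^ *physnet:/ {print $2}' /tmp/network.yml)
    echo "external physical network: ${physnet:-physnet1}"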
This prints out the error messages of the docker commands, to find
the reason for errors like [1].
[1] https://build.opnfv.org/ci/job/daisy-build-daily-master/239/consoleFull
Change-Id: Ic1bd85d999dbe584764bc9a05d22579835e55516
Signed-off-by: Zhijiang Hu <hu.zhijiang@zte.com.cn>
1. Add the multicast flag to daisy.conf.
2. Sleep 40 seconds so that trustme.sh can execute successfully.
3. Modify the CPU mode in the XML file to make it more general.
4. Fix a check_openstack_progress parameter error.
Change-Id: Ic150698ede448b7651e95d129aeb7d97a8f34309
Signed-off-by: zhouya <zhou.ya@zte.com.cn>
daisycloud-core is now a big repository because there are some big files
in its git history. Use "--depth 1" to reduce the amount of data to download.
Change-Id: I8ce0dc6675d2239a126bcf558300f1ad45cd3fb3
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
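The shallow clone looks roughly like this; the URL and branch are shown for illustration only:

    # --depth 1 fetches only the latest commit, skipping the heavy history.
    git clone --depth 1 --branch stable/ocata \
        https://github.com/openstack/daisycloud-core.git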
Move the kolla nova-compute.conf update handling to prepare/execute.py,
to unify all the related kolla configuration in Python.
Change-Id: Iac585df97d2855038a83f9bfdadfb9e449660c9c
Signed-off-by: zhongjun <zhong.jun@zte.com.cn>
1. Move a log message to its correct location (now at line 331);
in bare metal deployment, the virtual-deploy message was being printed.
2. Make all the "===" separators in the log messages use the same number of "=".
3. Remove the TODO list since the work is finished.
Change-Id: I34325c522036caf9d1aa58c9cbf30eb77843fdfc
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
The Daisy install temporary workdir can be configured as an input
argument of deploy.sh, and it should also be used in the subshell
daisy-img-modify.sh, so add the -w workdir input argument to
daisy-img-modify.sh.
1. Add an input argument -w workdir to daisy-img-modify.sh.
2. Change the WORKDIR default value to /tmp/workdir/daisy.
3. Refactor the code to move the centos7.qcow2 cleanup statement
from deploy.sh to daisy-img-modify.sh.
Change-Id: Id375a15ad2839c209329e644c5e032d044604e7d
Signed-off-by: zhongjun <zhong.jun@zte.com.cn>
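A minimal getopts sketch of the new option, assuming the default mentioned above; not the actual daisy-img-modify.sh:

    #!/bin/bash
    WORKDIR=/tmp/workdir/daisy          # default work directory
    while getopts "w:" opt; do
        case "$opt" in
            w) WORKDIR=$OPTARG ;;
            *) echo "usage: $0 [-w workdir]" >&2; exit 1 ;;
        esac
    done
    mkdir -p "$WORKDIR"
    echo "using workdir: $WORKDIR"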
Change-Id: Ic9e19d4e120fc53d96d0794239cd6e421f25ea27
Signed-off-by: zhouya <zhou.ya@zte.com.cn>
Refactor the deploy.sh variable names.
1. Unify the global variable names to upper case.
2. Rename the target_node_net/daisy_server_net variables with a
VMDEPLOY prefix for code readability.
3. Refactor usage() to 'function usage' for a uniform code style.
Change-Id: Ibd7ea91ac8b19cd7147e3a7d97b3359880cec59c
Signed-off-by: zhongjun <zhong.jun@zte.com.cn>
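A purely illustrative example of the conventions above; the values are placeholders, not real deploy.sh settings:

    #!/bin/bash
    VMDEPLOY_TARGET_NODE_NET=target_net   # globals upper-cased, VMDEPLOY prefix
    VMDEPLOY_DAISY_SERVER_NET=daisy_net

    function usage {                      # usage() rewritten in 'function' style
        echo "usage: $0 -s <scenario> -w <workdir>"
    }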
In the prepare_install function of tempest.py:
1. get_config.py needs the dha and network parameters.
2. The install_os_for_vm_step2 function needs the cluster_id parameter.
Change-Id: Idb13f71ced76f0d99dcbe818cdac3d3f2eb7d5df
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
Change-Id: I3bcc1e6d9cbcb2974fc9246a3b1559f9b988d530
Signed-off-by: zhouya <zhou.ya@zte.com.cn>
Change-Id: I1fb6036a2805ccb9bdbe23622514ccd9d997c1a5
Signed-off-by: SerenaFeng <feng.xiaowei@zte.com.cn>