The name of the file should say "baremetal", not "barematal".
Change-Id: I15d70b69943e8ce3032c76d1cd7bc7272a6b8d56
Signed-off-by: Manuel Buil <mbuil@suse.com>
Moving to the newer SHAs of stable/rocky
installer-type:osa
Change-Id: I89de6554d5e3bef8b2b49c6a3e621d3ca3a6f4dc
Signed-off-by: Manuel Buil <mbuil@suse.com>
To read the idf when the installer is kubespray, we need the network
details too
Change-Id: Idb9b0a4338a224e146abc78690067659bc94c302
Signed-off-by: Manuel Buil <mbuil@suse.com>
The PDF/IDF filenames to use during deployments in CI will be generated
dynamically based on which slave the job is running on, with the help
of the SLAVE_NAME environment variable Jenkins injects into the job
environment. It will probably look like this:

# default PDF/IDF files for virtual slaves
pdf=var/pdf.yml
idf=var/idf.yml
# non-virtual slaves get per-slave files
if [[ ! "$SLAVE_NAME" =~ virtual ]]; then
    pdf=var/${SLAVE_NAME}-pdf.yml
    idf=var/${SLAVE_NAME}-idf.yml
fi
./xci-deploy.sh -i $idf -p $pdf
deploy-scenario:os-nosdn-nofeature
installer-type:osa
Change-Id: Ief319ee36292ca888b97e4059a26337ee98dfef2
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
The IDF files contain DNS information, so we should respect that when
we configure the various XCI nodes. The DNS information is also a list
instead of a string, so treat it as such.
Change-Id: I1c4d5eb600baaca35b2838dcafa7a75e59bf6783
Signed-off-by: Markos Chandras <mchandras@suse.de>
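For illustration only, a minimal shell sketch of consuming the DNS
entries as a list rather than a single string (variable name and values
hypothetical; the real change lives in the Ansible templates):

dns_servers=("192.168.122.1" "8.8.8.8")  # hypothetical list from the IDF
# emit one nameserver line per entry instead of assuming a single string
for ns in "${dns_servers[@]}"; do
    echo "nameserver $ns"
done > /etc/resolv.conf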
We are deploying the NFS server on the computes. In all flavors, node2
takes the role of compute00 and thus the NFS server is at
172.29.244.12. Therefore, the OpenStack config for the ha flavor is
wrong.
Change-Id: I5e82ddd670b44e291c0b866ba4fde57e74b68643
Signed-off-by: Manuel Buil <mbuil@suse.com>
When deploying baremetal, the traffic should be segmented using the
different interfaces instead of through VLANs. All the config is done
based on idf and pdf information.
When doing non-baremetal, the opnfv VM and the nodes get the same
config. When doing baremetal, the opnfv VM and the nodes get different
network configs.
Change-Id: I23aa576bc782c7c69d511a5558827110c37b558a
Signed-off-by: Manuel Buil <mbuil@suse.com>
When deploying baremetal, the traffic should be segmented using the
different interfaces instead of through VLANs. All the config is done
based on idf and pdf information.
When doing non-baremetal, the opnfv VM and the nodes get the same
config. When doing baremetal, the opnfv VM and the nodes get different
network configs.
Apart from that, if the vlan_id is defined in the interface name, there
is no need for a VLAN_ID field in the interface descriptor. This
simplifies things.
Change-Id: Iddbb90af807b43e247e5ee11fe735df9e823d4bf
Signed-off-by: Manuel Buil <mbuil@suse.com>
Bifrost is exposing quite a few variables so we need to collect them
as well.
Change-Id: I7e7ca7a093f35a0acb53af360e58444f6c1de7e4
Signed-off-by: Markos Chandras <mchandras@suse.de>
The current configuration only dumps the boot info into a file, and
the pty does not work (i.e. virsh console opnfv returns a failure
because it cannot find a character device). After some investigation,
it is apparently impossible to have both active:
https://github.com/Mirantis/virtlet/issues/249
Therefore, we should remove the pty part of the XML. To connect to the
VM in case of network problems, we can always use VNC.
Apart from that, the console part is not necessary as libvirt will
create that one for us.
Change-Id: I80a59163b4ba4e6bff34cb5378893201e93ddb87
Signed-off-by: Manuel Buil <mbuil@suse.com>
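As a usage note for the VNC fallback (a sketch; the opnfv domain name
comes from the message above, everything else is illustrative):

# the pty console is gone, so look up the VNC display libvirt assigned
virsh vncdisplay opnfv      # prints e.g. :0
# then connect with any VNC client from a machine that can reach the host
vncviewer "$XCI_HOST":0     # XCI_HOST: hypothetical host address variable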
installer-type:kubespray
deploy-scenario:k8-nosdn-nofeature
Change-Id: I6b59df5112e9b3459bf3147557f5f22fe0fb778b
Signed-off-by: Markos Chandras <mchandras@suse.de>
Change-Id: I50c367433dc8cf8964c291c916ea939e25f638cb
Signed-off-by: Markos Chandras <mchandras@suse.de>
When we install packages from the distro repos we need to make sure
that the package database is updated. This also takes SUSE's zypper
package manager into consideration, which can benefit from the same
Ansible option.
Change-Id: I7a2206bfc5827b9ccb448278759711c560bb4679
Signed-off-by: Markos Chandras <mchandras@suse.de>
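The Ansible option in question is presumably update_cache, which both
the apt and zypper modules accept; a minimal ad-hoc sketch (package
name arbitrary):

# refresh the package database before installing; the zypper module
# accepts the same update_cache option on SUSE
ansible all -m apt -a "name=curl state=present update_cache=yes"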
Until now we were taking the names of the nodes to be generated from
the NODE_NAMES variable, whereas Ansible was fetching the names from
dynamic_inventory.py, which uses the idf. This resulted in problems:
when doing ha, Ansible was provisioning a compute as a controller and
vice versa.
This patch forces the create_nodes role to fetch the names from the
idf and thus align with the naming schema of Ansible.
deploy-scenario:k8-calico-nofeature
installer-type:kubespray
Change-Id: Id1473727405701fd9ed0cb2f1394ee8676cec337
Signed-off-by: Manuel Buil <mbuil@suse.com>
Right now, we only support ipmi hosts (either virtual or physical) and
that is why our json always describes the ipmi parameters. It does not
make sense to have a variable which would allow changing that.
Change-Id: I7b88aca5930a73d68342e3d4cf21f9e96286c4d7
Signed-off-by: Manuel Buil <mbuil@suse.com>
The variable should be host_group, which is generated at lines 32 and 35.
Change-Id: I7add3af73198ec0638dee0c8f189a3a372a78ee8
Signed-off-by: Manuel Buil <mbuil@suse.com>
Since the patch:
https://gerrit.opnfv.org/gerrit/#/c/63173/
the variable BIFROST_USE_PREBUILT_IMAGES has changed to
BIFROST_CREATE_IMAGE_VIA_DIB. As Jenkins does not trigger testing jobs
when editing files in xci/scripts/, this change is done in a separate
patch.
Change-Id: I3bc285936fae5b7514272ca0ad2418b60446e4aa
Signed-off-by: Manuel Buil <mbuil@suse.com>
Since this week we have two sfc scenarios:
- os-odl-sfc
- os-odl-sfc_osm
When $DEPLOY_SCENARIO=os-odl-sfc, with the current code testing_role
gets two values:
os-odl-sfc
os-odl-sfc_osm
This patch adds the '$' character to prevent matching scenarios which
concatenate characters after $DEPLOY_SCENARIO (see the sketch below).
Change-Id: Ia0782362da04e8b3ecd2ec6f13ccc8c404797fda
Signed-off-by: Manuel Buil <mbuil@suse.com>
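For illustration, the effect of anchoring the match with '$' in bash
(a minimal sketch, not the actual job code):

# unanchored: os-odl-sfc also matches inside os-odl-sfc_osm
[[ "os-odl-sfc_osm" =~ os-odl-sfc ]] && echo "matches"      # prints
# anchored: only an exact tail match passes
[[ "os-odl-sfc_osm" =~ os-odl-sfc$ ]] && echo "matches"     # silent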
We are using two variables which have a similar scope:
- create_image_via_dib
- use_prebuilt_images
We should use only one of them. create_image_via_dib is selected
because it also exists in upstream bifrost:
use_prebuilt_images = false is the same as create_image_via_dib = true
use_prebuilt_images = true is the same as create_image_via_dib = false
Change-Id: Ieaab78f1dc2d199746a2b13ebc82e9dc615d92e9
Signed-off-by: Manuel Buil <mbuil@suse.com>
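Given the truth table above, a hypothetical compatibility shim would be
a one-line inversion (variable names as in the message; this is not
part of the patch itself):

# derive the surviving flag from the deprecated one
if [[ "${use_prebuilt_images:-false}" == "true" ]]; then
    create_image_via_dib=false
else
    create_image_via_dib=true
fi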
We need a commit for the SFC scenario:
https://github.com/openstack/openstack-ansible-os_neutron/commit/200fa4a7aaa15a6d6758418eafffe093174d2f72
Change-Id: Ia497a49a910d16eaf3c7ee896f0f75aab812bd7a
Signed-off-by: Manuel Buil <mbuil@suse.com>
We need it for baremetal support in opensuse:
https://github.com/openstack/bifrost/commit/0f605cd723a68e2c2bb9b30a15a08e5aba777bd5
We move all related repos from Rocky to master (there are problems if
we SHA bump ironic etc. in Rocky while Bifrost is on master).
Change-Id: Icf0dd58c6fc6cc8f221d37a6ed3f3746f6577716
Signed-off-by: Manuel Buil <mbuil@suse.com>
The os-odl-sfc_osm scenario has been verified in the previous
patchsets of this commit:
https://gerrit.opnfv.org/gerrit/#/c/63505/
This patch needs to be merged after adding the OS variables to the
post-deployment playbook:
https://gerrit.opnfv.org/gerrit/#/c/63135/
installer-type:osa
deploy-scenario:os-odl-sfc_osm
Change-Id: I14e83357c9e1db6e31890b5f126b9e405e124594
Signed-off-by: Venkata Harshavardhan Reddy Allu <venkataharshavardhan_ven@srmuniv.edu.in>
OSM needs access to the OpenStack credentials and authentication URL.
This patch provides that using the xci_flavor-specific
user_variables.yml file. This patch is necessary to complete the
initial integration of OSM.
installer-type:osa
deploy-scenario:os-nosdn-osm
Change-Id: Ic228b323fc69ce46c8093f6dfbc116d8c71a0391
Signed-off-by: Venkata Harshavardhan Reddy Allu <venkataharshavardhan_ven@srmuniv.edu.in>
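For context, the OpenStack settings OSM needs look roughly like the
standard auth variables below (all values hypothetical; the patch
carries them via user_variables.yml instead):

export OS_AUTH_URL=http://172.29.236.10:5000/v3  # hypothetical keystone endpoint
export OS_USERNAME=admin
export OS_PASSWORD=secret                        # hypothetical
export OS_PROJECT_NAME=admin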
The Barbican OS component is introduced for the SFC HA scenario. The
reason is that in HA scenarios we need OpenStack Barbican to gather
and store the fernet keys so Tacker can access them and be able to
register new VIMs.
deploy-scenario:os-odl-sfc
installer-type:osa
JIRA: SFC-131
Change-Id: Ife416fb2a7dc04ddadc93f962695aee4ed448501
Signed-off-by: Panagiotis Karalis <pkaralis@intracom-telecom.com>
Since Rocky, networking-odl depends on ceilometer and requires it to
be installed. Therefore, all odl scenarios need to have ceilometer
deployed. Once that is done, we can unfreeze the n-odl repo.
Besides, we need to introduce a SHA bump for neutron and ceilometer to
include the latest changes to support this fix.
Ceilometer should always be git cloned, otherwise repo_build will fail
as ceilometer is now part of requirements.txt.
[mchandras: Instead of just bumping selective network related roles,
let's just do a complete sha bump for stable/rocky]
deploy-scenario:os-odl-sfc
installer-type:osa
Change-Id: I81a39436e4ff648faabda4e82fce1d3f14615741
Signed-off-by: Manuel Buil <mbuil@suse.com>
Signed-off-by: Markos Chandras <mchandras@suse.de>
ODL testcases in functest require new variables:
https://git.opnfv.org/functest/tree/functest/utils/env.py#n22
Otherwise the tests fail because they cannot contact ODL.
To find those variables, we fetch the ml2_conf.ini file from the
neutron container and then parse the values. This is only run when the
scenario has odl.
deploy-scenario:os-odl-sfc
installer-type:osa
Change-Id: If175bb7642e66e151b30e1ccd1b9040aa3481d8f
Signed-off-by: Manuel Buil <mbuil@suse.com>
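A minimal sketch of the parsing step (the key names follow
networking-odl's usual [ml2_odl] section; the exported variable name is
a hypothetical stand-in for the ones functest expects):

# pull the ODL connection details out of the fetched ml2_conf.ini
odl_url=$(awk -F' *= *' '/^url/ {print $2}' ml2_conf.ini)
odl_user=$(awk -F' *= *' '/^username/ {print $2}' ml2_conf.ini)
odl_pass=$(awk -F' *= *' '/^password/ {print $2}' ml2_conf.ini)
export SDN_CONTROLLER_URL="$odl_url"  # hypothetical variable name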
Information about the baremetal servers is collected for ironic to do
the provisioning. Two main things are done:
1 - baremetalhoststojson.yml fills the json config file fed to ironic
so that it knows how to boot the blades. In the baremetal case, the
create_vm.yml playbook will only create the opnfv vm. The variable
vms_to_create holds that information. The variable baremetal_nodes
specifies the physical nodes (empty for non-baremetal deployments).
2 - For PXE to work, we create a file called baremetalstaticips that
maps each server's MAC address to its IP. That file is moved into the
dnsmasq config directory (see the sketch below).
Change-Id: I0e788db1deb50769c183b71524a68ac0b925f8aa
Signed-off-by: Manuel Buil <mbuil@suse.com>
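For illustration, the dnsmasq syntax such a file uses is
dhcp-host=<mac>,<ip>; a sketch with hypothetical values:

# baremetalstaticips: one static lease per blade
cat > baremetalstaticips <<'EOF'
dhcp-host=52:54:00:aa:bb:01,192.168.122.2
dhcp-host=52:54:00:aa:bb:02,192.168.122.3
EOF
# then move it into dnsmasq's config directory, e.g. /etc/dnsmasq.d/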
The opnfv vm requires connectivity to two physical interfaces of the
host. These interfaces are:
1 - admin, where DHCP requests will arrive from the blades to do PXE
boot
2 - mgmt, which connects to the mgmt interfaces of the blades to do
the ansible configuration
To achieve this, the following is required (see the sketch below):
1 - Two libvirt networks that connect to two different linux bridges.
The relevant physical interfaces are connected to them. The interface
names are fetched from the idf.
2 - Two templates representing the new libvirt networks
(net-mgmt.xml.j2 and net-admin.xml.j2)
3 - Two interfaces defined in vm.xml.j2
Change-Id: I9037aa36802cfde44717b9394bab79b22d7dfaab
Signed-off-by: Manuel Buil <mbuil@suse.com>
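A sketch of wiring one such libvirt network to an existing linux
bridge (bridge and network names hypothetical; the real XML comes from
net-admin.xml.j2):

# define a libvirt network that forwards to an existing bridge
cat > net-admin.xml <<'EOF'
<network>
  <name>admin</name>
  <forward mode='bridge'/>
  <bridge name='br-admin'/>
</network>
EOF
virsh net-define net-admin.xml
virsh net-start admin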
Change some of the variables when calling opnfv-virtual.yml with
baremetal=true:
1 - We want to configure the DHCP mapping between ip and mac when
doing baremetal. The create-nodes role will generate a file with that
mapping for us (baremetalstaticips)
2 - Don't download the standard IPA image but build one with Fedora
(the only one that works in ericsson-pod2) when doing baremetal
3 - Wait for the blade to complete its booting. Its ssh port becomes
available during the IPA provisioning but that is not the final state;
we need to wait until the required distro gets installed.
When not doing baremetal this is fine, as the VMs boot very fast with
the chosen distro, but for baremetal it takes a while (ericsson-pod2
servers take around 2 minutes to finish all BIOS booting). The
playbook wait-for-baremetal.yml does this (see the sketch below).
Change-Id: I5536517209ff7f46ec034554d29566707778e397
Signed-off-by: Manuel Buil <mbuil@suse.com>
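A rough shell equivalent of that wait (node name and distro check
hypothetical; the playbook itself presumably uses Ansible's wait_for):

# ssh answering is not enough, since the IPA ramdisk also runs sshd;
# keep probing until the installed distro is the one that responds
until ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
      root@node1 'grep -qi opensuse /etc/os-release' 2>/dev/null; do
    sleep 10
done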
Until now we were creating an overlay network with the gateway in the
opnfv vm. This makes floating ips work, but if we need an internet
connection from the vms, things get complicated and messy.
As we are only using up to 7 ips from 192.168.122.0/24, we can
consider it the provider network for openstack and use ips starting
from 192.168.122.100 as floating ips or ips for routing (doing SNAT as
always).
Change-Id: I09af663069ae95a9d265d98f1531778eb37134e2
Signed-off-by: Manuel Buil <mbuil@suse.com>
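A sketch of the resulting provider-network layout in OpenStack CLI
terms (network/physnet names, pool end, and gateway are assumptions;
the subnet and pool start come from the message):

openstack network create --external \
    --provider-network-type flat --provider-physical-network physnet1 \
    ext-net
openstack subnet create --network ext-net \
    --subnet-range 192.168.122.0/24 --gateway 192.168.122.1 \
    --allocation-pool start=192.168.122.100,end=192.168.122.254 \
    ext-subnet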
deploy-scenario:k8-calico-nofeature
installer-type:kubespray
Change-Id: If1c9f5908f39f9c09efb86e27a3f3883b4cd75b9
Signed-off-by: Manuel Buil <mbuil@suse.com>
There is a bug when ironic communicates with rabbitmq due to a
parameter deprecation. This patch fixes it:
https://review.openstack.org/#/c/609499/
We can also take the opportunity to update all SHAs.
There is a problem:
"Unable to retrieve file contents\nCould not find or access '/home/opnfv/releng-xci/xci/infra/bifrost/playbooks/roles/common/venv_python_path.yml'"}
That file is in:
/home/opnfv/releng-xci/.cache/repos/bifrost/playbooks/roles/common/venv_python_path.yml
As I am not sure how to fix the Ansible search path, for the time
being I just added the file to where Ansible is searching for it.
Change-Id: I8e60f43ed7fc78a8925efaa36e41b0d872ea9a74
Signed-off-by: Manuel Buil <mbuil@suse.com>
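In shell terms the interim workaround amounts to something like the
following (paths taken verbatim from the error above; the actual patch
adds the file to the repo):

cp /home/opnfv/releng-xci/.cache/repos/bifrost/playbooks/roles/common/venv_python_path.yml \
   /home/opnfv/releng-xci/xci/infra/bifrost/playbooks/roles/common/venv_python_path.yml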
Whether the IPA image is created does not depend on the value of
use_prebuilt_images. This variable is intended to control the
subsequent call to the bifrost-create-dib-image role.
A few comments were added to clarify what we are doing in each call to
the bifrost-create-dib-image role.
Change-Id: Id66e1a969ca279a055640481719f118744eedf38
Signed-off-by: Manuel Buil <mbuil@suse.com>
A few playbooks and the create-vm-nodes role should be renamed to
reflect the new reality once the baremetal patches are merged. The
playbooks that must be renamed are:
- xci-prepare-virtual.yml
- xci-create-virtual.yml
Change-Id: Iaed1f93561fa9d39c7916e0643a5445cdddf4f97
Signed-off-by: Manuel Buil <mbuil@suse.com>
This switches between the parts of the code which are specific to
baremetal or non-baremetal. Those parts come with this patch:
https://gerrit.opnfv.org/gerrit/#/c/60797
It also selects different variables when calling the opnfv-virtual.yml
playbook:
https://gerrit.opnfv.org/gerrit/#/c/60795
The value of BAREMETAL is decided based on the vendor value of the pdf.
Change-Id: I8e6171f4f21db7f814a472e6ed1bacb30220b4ec
Signed-off-by: Manuel Buil <mbuil@suse.com>
Physical hardware PODs provide a pdf and an idf to describe the
hardware and other information (e.g. what the purpose of each
interface is). To reuse the same code for the opnfv vm and also be
consistent, we should describe the opnfv vm with an idf and a pdf too.
This patch simplifies what needs to be done for baremetal, especially
for this (future) patch:
https://gerrit.opnfv.org/gerrit/#/c/60797/11
As we add an idf, we should update dynamic_inventory and how we create
the opnfv vm. Obviously, the opnfv_vm.yml gets removed.
Change-Id: I930728474631fc214e4a9adc8581e0c16d230176
Signed-off-by: Manuel Buil <mbuil@suse.com>
This patch complements this other one:
https://gerrit.opnfv.org/gerrit/#/c/62575/2
We require the pdf and the idf (when doing baremetal) in the create-vm
role, so we should propagate that variable to the playbook that
triggers those roles.
Change-Id: I15806d386db4e6b11192829f2dbc61662bffec2b
Signed-off-by: Manuel Buil <mbuil@suse.com>