|
Baremetal is failing because ironic takes a long time to transfer the
image to the hard drives of the nodes.
Change-Id: Ief704e92307d1ea7fe55ee0268abae49e0126503
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
The IDF files contain DNS information, so we should respect it when
configuring the various XCI nodes. The DNS information is also a
list rather than a string, so treat it as such.
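For illustration, a minimal sketch of how the DNS entry might look
in an idf file (the surrounding layout is an assumption; only the
list shape matters here):

    idf:
      dns:            # a list of servers, not a single string
        - 8.8.8.8
        - 8.8.4.4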
Change-Id: I1c4d5eb600baaca35b2838dcafa7a75e59bf6783
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
We are using two variables which have a similar scope:
- create_image_via_dib
- use_prebuilt_images
We only need one of them. create_image_via_dib is selected because it
also exists in upstream bifrost:
use_prebuilt_images = false is the same as create_image_via_dib = true
use_prebuilt_images = true is the same as create_image_via_dib = false
Change-Id: Ieaab78f1dc2d199746a2b13ebc82e9dc615d92e9
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Information about baremetal servers is collected for ironic to do
the provisioning. Two main things are done:
1 - baremetalhoststojson.yml fills the json config file fed to ironic
so that it knows how to boot the blades. In the baremetal case, the
create_vm.yml playbook will only create the opnfv vm. The variable
vms_to_create holds that information. The variable baremetal_nodes
specifies the physical nodes (empty for non-baremetal deployments).
2 - For PXE to work, we create a file called baremetalstaticips that
maps the mac address of each server to its ip. That file is moved
into the dnsmasq config directory (see the sketch below).
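Assuming the file uses the standard dnsmasq dhcp-host syntax, one of
its lines would look roughly like this (mac and ip are made up):

    dhcp-host=52:54:00:aa:bb:cc,192.168.122.11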
Change-Id: I0e788db1deb50769c183b71524a68ac0b925f8aa
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
The opnfv vm requires connectivity to two physical interfaces of the
host. These interfaces are:
1 - admin, where DHCP requests will arrive from blades to do PXE boot
2 - mgmt, which connects to the mgmt of the blades to do the ansible
configuration
To achieve this, it is required:
1 - Two libvirt networks that connect to two different linux bridges.
The important physical interfaces are connected to them. The
interface names are fetched from the idf.
2 - Two templates representing the new libvirt networks
(net-mgmt.xml.j2 and net-admin.xml.j2); a sketch of one follows below.
3 - Two interfaces defined in vm.xml.j2
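For example, net-admin.xml.j2 could look roughly like this minimal
libvirt network definition that attaches to an existing linux bridge
(the network name and template variable are illustrative):

    <network>
      <name>admin</name>
      <forward mode="bridge"/>
      <bridge name="{{ admin_bridge_name }}"/>
    </network>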
Change-Id: I9037aa36802cfde44717b9394bab79b22d7dfaab
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Change some of the variables when calling opnfv-virtual.yml with
baremetal=true:
1 - We want to configure the DHCP mapping between ip and mac when
doing baremetal. The create-nodes role will generate a file with that
mapping for us (baremetalstaticips).
2 - Don't download the standard IPA image but build one with Fedora
(the only one that works in ericsson-pod2) when doing baremetal.
3 - Wait for the blade to complete its booting. Its ssh port becomes
available during the IPA provisioning, but that is not the final
state. We need to wait until the required distro gets installed.
When not doing baremetal, this is fine as the VMs boot very fast with
the chosen distro, but for baremetal it takes a while (ericsson-pod2
servers take around 2 minutes to finish all BIOS booting). The playbook
wait-for-baremetal.yml does this (sketched below).
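One plausible shape for that playbook, assuming it simply polls ssh
after a grace period (the variable names are guesses, not the actual
ones):

    - name: Wait until the installed distro is reachable over ssh
      wait_for:
        host: "{{ node_ip }}"    # hypothetical variable
        port: 22
        delay: 120               # ericsson-pod2 BIOS takes ~2 minutes
        timeout: 1800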
Change-Id: I5536517209ff7f46ec034554d29566707778e397
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
There is a bug when ironic communicates with rabbitmq due to a
parameter deprecation. This patch fixes it:
https://review.openstack.org/#/c/609499/
And we can take the opportunity to update all SHAs.
There is a problem:
"Unable to retrieve file contents\nCould not find or access '/home/opnfv/releng-xci/xci/infra/bifrost/playbooks/roles/common/venv_python_path.yml'"}
That file is in:
/home/opnfv/releng-xci/.cache/repos/bifrost/playbooks/roles/common/venv_python_path.yml
As I am not sure how to fix the ansible PATH, for the time being, I
just added the file to where Ansible is searching for it.
Change-Id: I8e60f43ed7fc78a8925efaa36e41b0d872ea9a74
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Whether or not the IPA image is created does not depend on the value
of use_prebuilt_images. That variable is intended to control the
following call to the bifrost-create-dib-image role.
I added a few comments to clarify what we are doing in each call to
the bifrost-create-dib-image role (illustrated below).
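Schematically, the two calls now look like this (the guard variable
on the first call is hypothetical; only the split matters):

    # IPA image: not controlled by use_prebuilt_images
    - role: bifrost-create-dib-image
      when: create_ipa_image | bool        # hypothetical guard

    # Deployment image: the call use_prebuilt_images actually controls
    - role: bifrost-create-dib-image
      when: not use_prebuilt_images | bool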
Change-Id: Id66e1a969ca279a055640481719f118744eedf38
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
A few playbooks and the create-vm-nodes role should be renamed to
reflect the new reality once the baremetal patches are merged. The
playbooks that must be renamed are:
- xci-prepare-virtual.yml
- xci-create-virtual.yml
Change-Id: Iaed1f93561fa9d39c7916e0643a5445cdddf4f97
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Physical hardware PODs provide a pdf and an idf to describe hardware
and other information (e.g. what the purpose of each interface is).
To reuse the same code for the opnfv vm and also be consistent, we
should describe the opnfv vm with an idf and a pdf too. This patch
simplifies what needs to be done for baremetal, especially for this
(future) patch:
https://gerrit.opnfv.org/gerrit/#/c/60797/11
As we add an idf, we should update dynamic_inventory and how we create
the opnfv vm. Obviously, opnfv_vm.yml gets removed.
Change-Id: I930728474631fc214e4a9adc8581e0c16d230176
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
This patch complements this other one:
https://gerrit.opnfv.org/gerrit/#/c/62575/2
We require the pdf and the idf (when doing baremetal) in the create-vm
role, so we should propagate that variable to the playbook that
triggers those roles.
Change-Id: I15806d386db4e6b11192829f2dbc61662bffec2b
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Deployers may want to use a different DNS server so allow them to
override the ipv4_nameserver option. If the variable is not set,
then we use the libvirt DNS if we are behind a proxy, otherwise
we default to the Google DNS.
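A sketch of that defaulting logic, assuming the proxy is detected via
an environment lookup (192.168.122.1 is the default libvirt bridge
address; the exact condition used is an assumption):

    ipv4_nameserver: "{{ '192.168.122.1' if lookup('env', 'http_proxy') else '8.8.8.8' }}"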
Change-Id: I96cf63758902d4aae3d155b2e8beef650449ebc9
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
This patch implements the first work item of the spec:
https://github.com/opnfv/releng-xci/blob/master/docs/specs/infra_manager.rst
It creates the VMs required by XCI to afterwards deploy the VIM. It
does that by reading the pdf provided by the user (a sketch of such a
pdf follows the list below).
- It is currently assumed that the OS for the VM will be installed in
the first disk of the node described by the pdf
- It is assumed that the opnfv VM characteristics are not described in
the pdf but in a similar document called opnfv_vm.yml
- All references to csv from bifrost-create-vm-nodes were removed
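For illustration only, the node data consumed from the pdf might look
roughly like this (the field names are guesses, not the actual
schema):

    nodes:
      - name: node1
        node:
          cpus: 6
          memory: 12G
        disks:                   # the OS goes to the first disk
          - name: disk1
            disk_capacity: 100G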
Change-Id: I46a85284e4ce7df21cbf66f66619b35f74251e68
Signed-off-by: Manuel Buil <mbuil@suse.com>
Co-Authored-by: Markos Chandras <mchandras@suse.de>
|
|
Commit 0d332a80cf731e5927c81c9f6929a8b83d43cddd ("Add proxy support")
switched the default DNS server to the libvirt bridge. However, we only
need to override the default DNS if we are behind a proxy server.
Change-Id: I7d8fe8c10a1aba2db4a703a81e74ef76fa593d95
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
This change removes the variables that are not used in any of the
playbooks/roles from the opnfv ansible vars.
Apart from that, all-caps ansible vars are replaced with lowercase
ones, and the impacted playbooks/roles are updated.
installer-type:osa
deploy-scenario:os-nosdn-nofeature
Change-Id: I99ebdc155b3903176ac5940b64cef0c0f3aa0f0d
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
In some cases the XCI development environment can be located behind a
corporate proxy, resulting in an additional layer to configure. These
changes intend to add proxy support for all linux distros in all the
possible flavors.
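A common Ansible pattern for threading the proxy through tasks, shown
here purely as an illustrative sketch:

    environment:
      http_proxy: "{{ lookup('env', 'http_proxy') }}"
      https_proxy: "{{ lookup('env', 'https_proxy') }}"
      no_proxy: "{{ lookup('env', 'no_proxy') }}"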
Change-Id: Iab469268809ac471d09e244bb3ccd83de1a41b88
Signed-off-by: Victor Morales <victor.morales@intel.com>
|
|
bifrost is currently the only way to deploy the infrastructure, but
other solutions will be added in the future, so we need to do some
preparation for XCI integration.
Change-Id: I961dd42157c924d88747074ddba6a318f8b537ac
Signed-off-by: Markos Chandras <mchandras@suse.de>
|