Age | Commit message | Author | Files | Lines |
|
The current configuration only dumps the booting info into a file and
the pty does not work (i.e. virsh console opnfv returns a failure
because it cannot find a character device). After some investigation,
it appears impossible to have both active:
https://github.com/Mirantis/virtlet/issues/249
Therefore, we should remove the pty part of the xml. To connect to the
VM in case of network problems, we can always use vnc.
Apart from that, the console part is not necessary as libvirt will
create it for us.
Change-Id: I80a59163b4ba4e6bff34cb5378893201e93ddb87
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Currently we take the names of the nodes to be generated from the
NODE_NAMES variable, whereas ansible fetches the names from
dynamic_inventory.py, which uses the idf. This resulted in problems: when
doing ha, ansible was provisioning a compute as a controller and
vice versa.
This patch forces the create_nodes role to fetch the names from the idf
and thus align with the naming schema of ansible.
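A minimal sketch of the idea in Python, assuming the IDF is YAML and keeps
the per-node roles under xci.nodes_roles (the layout and key names are
assumptions for illustration, not taken from this log):

    import yaml  # PyYAML

    def node_names_from_idf(idf_path):
        """Return the node names listed in the IDF (assumed layout)."""
        with open(idf_path) as f:
            idf = yaml.safe_load(f)
        # Assumption: the IDF keeps per-node roles under xci.nodes_roles,
        # keyed by node name, which is also what dynamic_inventory.py reads.
        return list(idf['xci']['nodes_roles'].keys())

    # The create_nodes role would iterate over these names instead of a
    # separately maintained NODE_NAMES variable.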
deploy-scenario:k8-calico-nofeature
installer-type:kubespray
Change-Id: Id1473727405701fd9ed0cb2f1394ee8676cec337
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Right now, we only support ipmi hosts (either virtual or physical) and
that is why our json always describes the ipmi parameters. It does not
make sense to have a variable which would allow changing that.
Change-Id: I7b88aca5930a73d68342e3d4cf21f9e96286c4d7
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
The variable should be host_group, which is generated at lines 32 and 35.
Change-Id: I7add3af73198ec0638dee0c8f189a3a372a78ee8
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
|
|
|
|
ODL testcases in functest require new variables:
https://git.opnfv.org/functest/tree/functest/utils/env.py#n22
Otherwise the test fails because it cannot contact ODL.
To find those variables, we fetch the ml2_conf.ini file from the neutron
container and then parse the values. This is only run when the
scenario has odl.
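A hedged sketch of the parsing step (the [ml2_odl] section and option names
follow the usual ml2 plugin layout and are an assumption here, as are the
exported variable names):

    import configparser
    from urllib.parse import urlparse

    def odl_env_from_ml2_conf(path):
        """Derive ODL-related env vars from a fetched ml2_conf.ini (sketch)."""
        cfg = configparser.ConfigParser()
        cfg.read(path)
        # Assumption: the ODL mechanism driver keeps its settings in
        # [ml2_odl], with 'url' pointing at the ODL northbound endpoint.
        url = urlparse(cfg.get('ml2_odl', 'url'))
        return {
            'SDN_CONTROLLER_IP': url.hostname,
            'SDN_CONTROLLER_WEBPORT': str(url.port or 8080),
            'SDN_CONTROLLER_USER': cfg.get('ml2_odl', 'username', fallback='admin'),
            'SDN_CONTROLLER_PASSWORD': cfg.get('ml2_odl', 'password', fallback='admin'),
        }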
deploy-scenario:os-odl-sfc
installer-type:osa
Change-Id: If175bb7642e66e151b30e1ccd1b9040aa3481d8f
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Information about baremetal servers is collected for ironic to do
the provisioning. Two main things are done:
1 - baremetalhoststojson.yml fills the json config file fed to ironic
so that it knows how to boot the blades. In the baremetal case, the
create_vm.yml playbook will only create the opnfv vm. The variable
vms_to_create holds that information. The variable baremetal_nodes
specifies the physical nodes (empty for non-baremetal deployments)
2 - For PXE to work, we create a file called baremetalstaticips that
maps each server's mac address to its ip. That file is moved into the
dnsmasq config directory
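A hedged sketch of the two artifacts (field names and the JSON schema are
illustrative, not the exact format the playbook produces):

    import json

    def write_baremetal_config(nodes, json_path, dnsmasq_path):
        """nodes: list of dicts with name, mac, ip and ipmi details (assumed)."""
        # 1 - JSON config fed to ironic/bifrost so it knows how to boot the blades.
        inventory = {}
        for n in nodes:
            inventory[n['name']] = {
                'driver': 'ipmi',
                'driver_info': {'power': {'ipmi_address': n['ipmi_address'],
                                          'ipmi_username': n['ipmi_user'],
                                          'ipmi_password': n['ipmi_pass']}},
                'nics': [{'mac': n['mac']}],
            }
        with open(json_path, 'w') as f:
            json.dump(inventory, f, indent=2)
        # 2 - dnsmasq static mapping, one dhcp-host=<mac>,<ip> line per server,
        # dropped into the dnsmasq config directory so PXE hands out
        # predictable addresses.
        with open(dnsmasq_path, 'w') as f:
            for n in nodes:
                f.write('dhcp-host={},{}\n'.format(n['mac'], n['ip']))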
Change-Id: I0e788db1deb50769c183b71524a68ac0b925f8aa
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
opnfv vm requires connectivity to two physical interfaces of the host.
These interfaces are:
1 - admin, where DHCP requests will arrive from the blades to do PXE boot
2 - mgmt, which connects to the mgmt of the blades to do the ansible
configuration
To achieve this, the following is required:
1 - Two libvirt networks that connect to two different linux bridges.
The relevant physical interfaces are connected to them. The interface
names are fetched from the idf (see the sketch below)
2 - Two templates representing the new libvirt networks
(net-mgmt.xml.j2 and net-admin.xml.j2)
3 - Two interfaces defined in vm.xml.j2
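A small sketch of the bridge wiring (IDF keys, bridge names and the XML are
illustrative; the real templates are net-admin.xml.j2 and net-mgmt.xml.j2):

    import yaml

    NET_XML = """<network>
      <name>{name}</name>
      <forward mode='bridge'/>
      <bridge name='{bridge}'/>
    </network>"""

    def render_networks(idf_path):
        with open(idf_path) as f:
            idf = yaml.safe_load(f)
        # Assumption: the IDF maps each logical network to the host interface
        # that carries it, e.g. idf['net_config']['admin']['interface'].
        for net in ('admin', 'mgmt'):
            iface = idf['net_config'][net]['interface']
            bridge = 'br-{}'.format(net)   # linux bridge enslaving that interface
            print('{} uses host interface {}'.format(net, iface))
            print(NET_XML.format(name=net, bridge=bridge))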
Change-Id: I9037aa36802cfde44717b9394bab79b22d7dfaab
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
|
|
Until now we were creating an overlay network with its gateway in the
opnfv vm. This makes floating ips work, but if we need internet
connectivity from vms, things get complicated and messy.
As we only use up to 7 ips from 192.168.122.0/24, we can consider it
the provider network for openstack and use ips starting from
192.168.122.100 as floating ips or ips for routing (doing SNAT as
always).
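For the record, the split this implies, as a small sketch using the values
above:

    import ipaddress

    provider = ipaddress.ip_network('192.168.122.0/24')
    floating_start = ipaddress.ip_address('192.168.122.100')
    # XCI itself only needs a handful of addresses below .100; everything
    # from .100 upwards goes to OpenStack as floating/routed IPs (SNAT as
    # before).
    floating = [ip for ip in provider.hosts() if ip >= floating_start]
    print(floating[0], '-', floating[-1])   # 192.168.122.100 - 192.168.122.254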
Change-Id: I09af663069ae95a9d265d98f1531778eb37134e2
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
deploy-scenario:k8-calico-nofeature
installer-type:kubespray
Change-Id: If1c9f5908f39f9c09efb86e27a3f3883b4cd75b9
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
A few playbooks and the create-vm-nodes role should be renamed to
reflect the new reality once the baremetal patches are merged. The
playbooks that must be renamed are:
- xci-prepare-virtual.yml
- xci-create-virtual.yml
Change-Id: Iaed1f93561fa9d39c7916e0643a5445cdddf4f97
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Physical hardware PODs provide a pdf and an idf to describe the hardware
and other information (e.g. what the purpose of each interface is). To
reuse the same code for the opnfv vm and to stay consistent, we should
also describe the opnfv vm with an idf and a pdf. This patch simplifies
what needs to be done for baremetal, especially for this (future) patch:
https://gerrit.opnfv.org/gerrit/#/c/60797/11
As we add an idf, we should update dynamic_inventory and how we create
the opnfv vm. Obviously, the opnfv_vm.yml gets removed.
Change-Id: I930728474631fc214e4a9adc8581e0c16d230176
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
A new var FUNCTEST_VERSION is introduced to jobs to control the
version to use for Functest.
Change-Id: Ice7aa9f910db2353ce3d0bef198bef9fa3efe9fd
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
OSM requires a CA even when we create a self-signed certificate. We
don't actually need to do that since HAproxy and friends can create the
whole chain for us, so we can finally get rid of this playbook.
installer-type:osa
deploy-scenario:os-nosdn-nofeature
Change-Id: I14a3adbe3492cd6c562c5167c42dd45756e8e3dd
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
deploy-scenario:k8-nosdn-nofeature
installer-type:kubespray
Change-Id: Ieb531b66bd36bbf8c28f755a52a98f0b41ae5efa
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
|
|
installer-type:kubespray
deploy-scenario:k8-nosdn-nofeature
Change-Id: If81aef632b064565fbf5c308909b44ff7409c33e
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
The openrc should contain the path to OS_CACERT within the container,
not on the opnfv vm.
Change-Id: Ief4cb4ae647ff0f2cd4f3ebe8a2993bb71b0363f
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
The created script runs yardstick in a similar way to how the functest
script does.
installer-type:osa
deploy-scenario:os-nosdn-nofeature
Change-Id: Ic03445ec03fcfec8dc0d09f638e7cb1187fef883
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
This change renames the functest prepare role to prepare-tests and makes
other adjustments to the role, moving common test preparation steps into
their own script so we can prepare for Functest and Yardstick in one go.
Similar things need to be prepared for running Functest and Yardstick,
such as
- installed packages
- external network creation
- creation of run-functest.sh and run-yardstick.sh scripts from templates
- preparation of environment variables
This change will fail verification until the change below is submitted.
https://gerrit.opnfv.org/gerrit/#/c/61645/
Change-Id: Id1020d3e61abd3f087863c06a132c5021339d655
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
|
|
|
|
Functest's download_images.sh script downloads images that are
not needed for functest-smoke, so we only download the necessary
images to cut the time down.
deploy-scenario:os-nosdn-nofeature
installer-type:osa
Change-Id: I0be643c4ccd4b8009e68433f5d635231afd2550a
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
|
|
The role used to parse the output of 'virsh list --all' to determine how
many VMs are present and shut off on the system. This takes *all* VMs
on the system into consideration, so it may skip creation of some or all
of the XCI VMs if we happen to have other VMs present. We can improve
the situation by simply dropping this check and always provisioning the
VMs we want. If a VM is already present, then the module will simply
do a sanity check of its configuration. This allows XCI to run alongside
other VMs.
Change-Id: I54255a1959509671c0305f48f23a55b6e900684f
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
According to OpenStack, admin is the network for pxe boot and mgmt is
the network over which OpenStack services communicate. We were using
both indistinctly in XCI.
Change-Id: I3959e767098ac2be7161a5e84735fde9ab129784
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Functest smoke tests require additional images to be available
during testing. One of the images is decompressed using xz, so xz
needs to be available on the deployment host in order to have the
images available for Functest execution.
Change-Id: I5647b3bef37fc55e8c5cc9aec5d0b2c3ea628b8a
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
This change
- bumps OSA SHA to cbfdb7dc295ff702044b807336fab067d84a3f20
(mostly based on Rocky RC1)
- bumps bifrost SHA to c1c6fb7487d5b967624400623fd35aabf303b917
- pins Ansible to 2.4.6.0
- switches to ollivier/functest-healthcheck since OpenStack is bumped to Rocky
Change-Id: Icc14e3e794b489dafd78b426c54051a3732ccb1a
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
Version 1.4 brings in additional dependencies and it's currently
not used in OpenStack anyway, so let's stick to 1.3 for now.
Change-Id: I2489168cae12f7fa3271c2de7d4fcf37bdb97810
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
* changes:
xci: xci-destroy-env.sh: Update virtualbmc path
xci: create-vm-nodes: Install virtualbmc in the XCI virtualenv
xci: osa: Drop openSUSE mirror variables
|
|
XCI prepares a virtualenv for us, so we should install virtualbmc
in it.
Change-Id: I320d1c7cad9c5c821269b55252cb7ab4f5136f40
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
OPNFV VM creation is separated from the rest of the nodes, resulting
in its disk not being resized. It is important to resize its disk to
the value defined in its PDF (opnfv_vm.yml) so the installation of the
other tools does not fail due to lack of space.
Change-Id: I8300e6e355d11788cc983fcebca56076e89918e1
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
This patch implements the first work item of the spec:
https://github.com/opnfv/releng-xci/blob/master/docs/specs/infra_manager.rst
It creates the VMs required by XCI to deploy the VIM afterwards. It does
that by reading the pdf provided by the user.
- It is currently assumed that the OS for a VM will be installed on the
first disk of the node described by the pdf (see the sketch below)
- It is assumed that the opnfv VM characteristics are not described in
the pdf but in a similar document called opnfv_vm.yml
- All references to csv from bifrost-create-vm-nodes were removed
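A hedged sketch of how the VM list could be assembled (the PDF and
opnfv_vm.yml field names are illustrative, not the exact schema):

    import yaml

    def vms_to_create(pdf_path, opnfv_vm_path='opnfv_vm.yml'):
        """Build the VM list from the PDF plus the separate opnfv_vm.yml."""
        with open(opnfv_vm_path) as f:
            vms = [yaml.safe_load(f)]       # opnfv VM is described outside the PDF
        with open(pdf_path) as f:
            pdf = yaml.safe_load(f)
        for node in pdf['nodes']:
            vms.append({
                'name': node.get('name'),
                'memory': node['node']['memory'],
                # Assumption: the OS goes on the first disk listed for the node.
                'disk': node['disks'][0]['disk_capacity'],
            })
        return vms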
Change-Id: I46a85284e4ce7df21cbf66f66619b35f74251e68
Signed-off-by: Manuel Buil <mbuil@suse.com>
Co-Authored-by: Markos Chandras <mchandras@suse.de>
|
|
Add a new role based on the bifrost one to create nodes for the bifrost
virtual deployments. This role will install and configure libvirt on the
host, download a prebuilt OPNFV VM image and deploy the OPNFV VM using
that image. Moreover, it will create the rest of the nodes for the
virtual deployment which will be configured by bifrost later on.
Change-Id: I9fbd084261351d3b53ae373060f43df046191c5e
Co-Authored-by: Markos Chandras <mchandras@suse.de>
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Functest changes have a significant impact that blocks everything in XCI,
so this change pins the image to a known sha to keep the original set of
healthcheck testcases until the impacts are analysed and the concerns are
raised to Functest and the wider OPNFV Community and addressed based on
community consensus.
Pinned version of functest-healthcheck contains the test cases below.
- connection_check
- api_check
- snaps_health_check
deploy-scenario:os-nosdn-nofeature
installer-type:osa
Change-Id: Ic9222af8c27e58491b7b60a7504df9d792b5e753
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
We configure static IPs on the various nodes but do nothing for DNS,
assuming that DNS is configured by another entity. However, the IDF
file already contains DNS information for us, so we should use that
instead. Moreover, we update the IDF file to use the gateway as DNS
instead of the Google one in order to make it more usable on
restricted networks.
Change-Id: Ieba58ec9558080a1296e204c4f99bae859e9daef
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
|
|
The bootstrap role configures NTP and networking on hosts so we
should use it on k8s deployments as well.
installer-type:kubespray
deploy-scenario:k8-nosdn-nofeature
Change-Id: I04bd1e1c2c325baabfb836bd8cca60c5f59344c7
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
The IDF file contains the netmask for every network so we should use
that information instead of using hardcoded values.
Change-Id: Ie798cb49563bdb72fdfb7b6e9e269692bf1f7bc9
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
The PDF and IDF files contain all the information we need for the
virtual XCI deployment, so we can use them to create a dynamic inventory
and get rid of all the static ones, which could easily get outdated
as the PDF and IDF files evolve over time.
This initial version of the dynamic inventory contains a lot of
unnecessary generated information, but we do that in order to ease
the migration from static files to the dynamic inventory. The dynamic
inventory will be improved in the future as we consume more and more
information from the PDF and IDF files.
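The shape of what such a script emits, as a minimal sketch (group names,
the xci.nodes_roles key and the pairing of IDF entries with PDF nodes are
assumptions for illustration):

    import json
    import yaml

    def build_inventory(pdf_path, idf_path):
        with open(pdf_path) as f:
            pdf = yaml.safe_load(f)
        with open(idf_path) as f:
            idf = yaml.safe_load(f)
        inventory = {'all': {'hosts': []}, '_meta': {'hostvars': {}}}
        # Assumption: node names and roles come from the IDF while per-node
        # details come from the PDF entry in the same position.
        for idx, (name, roles) in enumerate(idf['xci']['nodes_roles'].items()):
            inventory['all']['hosts'].append(name)
            inventory['_meta']['hostvars'][name] = {'node_roles': roles,
                                                    'pdf_entry': pdf['nodes'][idx]}
            for role in roles:
                inventory.setdefault(role, {'hosts': []})['hosts'].append(name)
        return inventory

    if __name__ == '__main__':
        # Ansible invokes a dynamic inventory script with --list and expects
        # JSON groups plus _meta.hostvars on stdout.
        print(json.dumps(build_inventory('pdf.yml', 'idf.yml'), indent=2))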
Change-Id: Id9f07a61c67a5cffcbc18079a341e5d395020a27
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
We split the networking task into distro-specific files to make it
easier to read. Moreover, the debian network configuration has been
improved by simply sharing a common file across all nodes and also
using the 'source' facility in the main /etc/network/interfaces file
to keep one configuration file per interface.
Change-Id: Ic822fe6dc197227e70c0ba7cee812629df287d82
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
|
|
Some of the files needed by Functest are pretty big and it takes time
to download them all, so this change ensures that only the files needed
by healthcheck are downloaded and the rest are not.
Further changes are needed to make the list even smaller for the smoke
test, but we need Functest guidance to identify which of them are needed
for which testing. It could be done by having suite-specific
download_images.sh variants like download_images_healthcheck.sh,
download_images_smoke.sh and so on.
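One way to express the suite split (the image names are illustrative, not
Functest's actual lists):

    # Map each suite to the images it actually needs so the download step
    # can skip everything else.
    IMAGES_BY_SUITE = {
        'healthcheck': ['cirros-0.4.0-x86_64-disk.img'],
        'smoke': ['cirros-0.4.0-x86_64-disk.img',
                  'ubuntu-14.04-server-cloudimg-amd64-disk1.img'],
    }

    def images_for(suite):
        """Return only the images the given Functest suite needs."""
        return IMAGES_BY_SUITE.get(suite, [])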
deploy-scenario:os-nosdn-nofeature
installer-type:osa
Change-Id: Ib2c5867adfad8097d1a084c39ad08cc4df2e4549
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
This change updates the prepare-functest role for testing k8s scenarios
using functest healthcheck. The changes include
- update tasks to skip checking/creation of the public gateway, which
is only needed for OpenStack based scenarios
- update the run-functest.sh.j2 template and set the docker image
name based on the FUNCTEST_SUITE_NAME that is going to be used
(sketched below)
- update the run-functest.sh.j2 template and add the commands needed to
run tests using the functest-kubernetes-${FUNCTEST_SUITE_NAME} docker image
- update env.j2 to exclude setting the var EXTERNAL_NETWORK, which is
only needed for OpenStack based scenarios
Apart from updating the prepare-functest role, a bug has also been fixed
by adding the fetching of xci.env for the kubespray installer.
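A sketch of the image-name logic the template implements (the docker
repository prefix and tag handling are assumptions):

    def functest_image(suite, kubernetes=False, tag='latest'):
        """Pick the docker image for a given FUNCTEST_SUITE_NAME (sketch)."""
        family = 'functest-kubernetes' if kubernetes else 'functest'
        # e.g. functest-kubernetes-healthcheck for k8s scenarios and
        # functest-healthcheck for OpenStack ones.
        return 'opnfv/{}-{}:{}'.format(family, suite, tag)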
installer-type:kubespray
deploy-scenario:k8-nosdn-nofeature
Change-Id: Ia701db9748ea9509a2dc165341285fb189aa7266
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
CI_LOOP, NODE_NAME, and BUILD_TAG are needed for logging info to console.
FUNCTEST_MODE and FUNCTEST_SUITE_NAME are important for stating what level
of testing we do for verify and merge jobs.
Change-Id: Iaa5499155b4b94a1cfc6b5c70fe6f8f7417502a6
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
This reverts commit 5dc7b76e38019c059cea159769cdb2c37af98ded.
OPENSTACK_OSA_VERSION was removed by that patch and the variable is
needed. Looking at the patch, it seems the variable was removed
by mistake.
Change-Id: I73dc7a7ec393231717f847ff303f9b2f99a00cc0
|
|
The jinja templates used for the networking setup are based on the
openstack-ansible needs, and those needs can differ for other installers.
This change proposes to make the network configuration depend on the
installer.
Signed-off-by: Victor Morales <victor.morales@intel.com>
Change-Id: Ie805c3c7716393377d4dfcb32ed794cc1039d515
|
|
When we run XCI for the first time, Ansible picks the first active
interface as the default one. However, after we configure all the XCI
bridges etc. and we try to run this role again, Ansible may have changed
its mind about which interface is active and it could default to one of
the bridges. This forces the role to redo the network configuration, but
this time the bridges are being attached to bridges, so everything goes
terribly wrong after that. The way to solve this is to add a local fact
about which interface should be considered the 'real' default one so
that subsequent calls to this role do not destroy the network (a sketch
follows below).
This also drops the task which removed the network configuration files
on SUSE platforms since Ansible is smart enough to not touch them if
they are configured properly.
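A hedged sketch of the local-fact idea (the fact file name and key are
illustrative; Ansible reads JSON files from /etc/ansible/facts.d into
ansible_local):

    import json
    import os

    FACT_FILE = '/etc/ansible/facts.d/xci.fact'   # exposed as ansible_local.xci

    def remember_default_interface(current_default):
        """Record the 'real' default interface once; reuse it on later runs."""
        if os.path.exists(FACT_FILE):
            with open(FACT_FILE) as f:
                return json.load(f)['default_interface']
        os.makedirs(os.path.dirname(FACT_FILE), exist_ok=True)
        with open(FACT_FILE, 'w') as f:
            json.dump({'default_interface': current_default}, f)
        return current_default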
Change-Id: Ic0525e934b1934a40d69e6cf977615ab9b3dac6d
Signed-off-by: Markos Chandras <mchandras@suse.de>
|