Age | Commit message | Author | Files | Lines |
|
We are configuring static IPs on the various nodes but we don't do
anything for DNS, assuming that DNS is being configured by another
entity. However, the IDF file already contains DNS information for us,
so we should use that instead. Moreover, we update the IDF file to use
the gateway as the DNS server instead of the Google one, in order to
make it more usable on restricted networks.
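As a rough illustration (the key names below are assumptions rather than
the exact IDF schema, and the addresses are only examples), the relevant
IDF entry could look like:

  idf:
    net_config:
      public:
        gateway: 192.168.122.1
        dns:
          - 192.168.122.1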
Change-Id: Ieba58ec9558080a1296e204c4f99bae859e9daef
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
The kubespray installer contains one inventory per flavor. We can get
rid of these files and use the dynamic inventory, similar to OSA.
Moreover, we extend the dynamic inventory to read additional group
variables per flavor if necessary. This way we can still pass additional
information to the inventory on a per-flavor basis. This also fixes a typo
in the 'IDF' file. We also need to bump Ansible for kubespray since the
version we were using has trouble with dynamic inventories.
Change-Id: Ic58101555f81aec5fee3c193608440aa89bbe445
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
Each installer has its own Ansible groups so we need to record such
information separately. Moreover, we need to add 'flavor' information
to the IDF so we know which hosts belong to which flavor. This also
fixes the kubernetes installer type to be 'kubespray' instead of 'k8s'.
Finally, we extend the IDF to also set appropriate hostnames for the
nodes.
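A rough sketch of the kind of information meant here; every key and value
below is illustrative rather than the exact schema:

  xci:
    flavors:
      mini:
        - opnfv
        - node1
  kubespray:
    hostnames:
      node1: master1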
Change-Id: I52b20908ad927840e0b38fba96be8faf6da2b52d
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
This is a proposal for a self-sufficient PDF/IDF pair describing the POD
where XCI is running.
The PDF (Pod Description File) describes the physical
level of the POD where XCI will run the installer. It lists the servers
and their characteristics (CPU/RAM/DISK/NICs).
The IDF (Installer Description File) describes how the installers
will use the POD. Two sections of this IDF are important today:
- idf.net_config describes the network topology
- the xci section describes how the common steps of XCI (network, nfs,
ceph, ...) will use the pod.
Another IDF section, idf.[installer], currently empty, will
contain all pod specifics that are tied to one installer (osa,
kolla, k8s, ...) and not shared with the others.
These two files describe the virtual pod as it is already
deployed by XCI. The default files can be replaced by ones
describing the target pod (either manually or by the CI). It would then
be up to the install process to take these files into account (to be done).
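A minimal sketch of the intended layout, with purely indicative field
names and values:

  # pdf.yml - physical level: servers and their CPU/RAM/DISK/NICs
  nodes:
    - name: node1
      node:
        cpus: 8
        memory: 16G
      disks:
        - disk_capacity: 100G
      interfaces:
        - mac_address: "52:54:00:00:00:01"

  # idf.yml - how the installers will use the pod
  idf:
    net_config:
      mgmt:
        network: 172.29.236.0
        mask: 22
    osa: {}      # idf.[installer] section, currently empty
  xci:
    flavors: {}  # used by the common XCI steps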
Change-Id: I3dcbd965f8c84b03d34eb0fd68599d7bec402dbd
Signed-off-by: Blaisonneau David <david.blaisonneau@orange.com>
|
|
This change removes the variables that are not used in any of the
playbooks/roles from the OPNFV Ansible vars.
Apart from that, all-caps Ansible vars are replaced with lowercase ones
and the impacted playbooks/roles are updated.
installer-type:osa
deploy-scenario:os-nosdn-nofeature
Change-Id: I99ebdc155b3903176ac5940b64cef0c0f3aa0f0d
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
|
|
Change-Id: I330bc036f901d4ba61bc94ee6e085cadf54b4d8b
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
The upstream pw-token-gen tool doesn't need python-crypto anymore since
e9f957861b4160640f6debb2b939084ec43b43b2 ("Make pw-token-gen.py more
random") so we no longer need to install that package.
Change-Id: Ib53f246db999ff8ecfed2e3f62143c780c483fbd
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
The docker packages that we install in the OPNFV VM are needed by
functest so add them to the related role.
Change-Id: I6ebe76fd030859f757d41ecf20c30ab76888ee9c
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
Ironic and Horizon are not strictly needed for a functional deployment
and they are not currently required by functest, so we can remove them
from the default deployment.
Change-Id: I171483f7b774951f84687529e98cb519afa48043
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
- Integrate XCI with the out-of-band os-odl-bgpvpn role
- Install python-neutronclient on the opnfv VM for the
  OpenStack BGPVPN specific CLI commands (see the sketch below)
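A minimal sketch of the task meant by the second point; the task wording
is an assumption, the module usage is standard Ansible:

  - name: install python-neutronclient for the BGPVPN CLI commands
    pip:
      name: python-neutronclient
      state: present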
Change-Id: Ib737349e2b2429bd366881f1e3657daf8c5c30ac
Signed-off-by: Periyasamy Palanisamy <periyasamy.palanisamy@ericsson.com>
|
|
Change-Id: I1d58f55a1bda258cc3afbfb81e2dd5a1c8e792a1
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
Hardcoding the interface as a variable is very fragile since it varies
from host to host. We can instead use Ansible facts to find out the
interface name and then use that to configure all the VLANs and
networking.
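A sketch of that idea; ansible_default_ipv4.interface is a standard
Ansible fact, while the variable it is stored in is an assumption:

  - name: derive the host interface from facts instead of hardcoding it
    set_fact:
      host_interface: "{{ ansible_default_ipv4.interface }}"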
Change-Id: Ie7e2409d638625b9bede23b6c1fe33dc36f81840
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
This commit introduces kubespray into XCI.
The k8s install currently assumes that the k8s
and OpenStack installations cannot coexist.
If XCI_INSTALLER is set to "kubespray" and
DEPLOY_SCENARIO is set to "k8-nosdn-nofeature",
xci-deploy.sh will install kubernetes instead of OpenStack.
The kubernetes version is currently the beta release v1.9.0,
following the master branch of kubespray,
which only supports Ubuntu for now;
openSUSE and CentOS still need to be developed and tested.
This patch creates the directory xci/installer/kubespray,
where the kubespray-related files are placed.
xci/installer/$installer/playbooks/configure-localhost.yml was moved
to xci/playbooks/configure-localhost.yml as a common YAML file.
You can modify parameters in xci/installer/kubespray/files/k8s-cluster.yml
according to your needs to deploy the cluster.
When deploying kubernetes,
kubespray is downloaded to releng-xci/.cache/repos/kubespray.
If your flavor is ha, the haproxy_server and keepalived roles are downloaded
to xci/playbook/roles, which set up the haproxy service for kubernetes.
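As an example of the kind of parameters that can be adjusted in
k8s-cluster.yml, the names below follow upstream kubespray and the values
are only illustrative (the version matches the one mentioned above):

  # xci/installer/kubespray/files/k8s-cluster.yml (excerpt, illustrative)
  kube_version: v1.9.0
  kube_network_plugin: calico
  cluster_name: cluster.local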
Change-Id: I24d521a735d7ee85fbe5af8c4def65f37586b843
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|
|
Change-Id: If8c0de44c313fdc22b1c7443b12d42769035c5b0
Signed-off-by: Tapio Tallgren <tapio.tallgren@nokia.com>
|
|
Using 'installer' to describe the tool that will deploy the foundations
of a particular XCI scenario is more appropriate than NFVI, which
normally describes both the physical and virtual resources needed by
an NFV deployment.
Change-Id: Ib8b1aac58673bf705ce2ff053574fd10cb390d71
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
Introduce a new XCI_DISTRO variable to select the distribution to deploy
on the VMs in order to make deployments more flexible and decouple the
VM OS selection from the host one. The default value for this new
variable is to match the host OS but users can always set it to one
of the supported distributions. We can now simply execute the
install-ansible.sh script instead of sourcing it in order to keep
the environment as clean as possible.
Change-Id: Ia74eb0422f983848cde0fb7b220ea1035dfa78bc
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
We may introduce other NFVIs into XCI in the future, so it is necessary
to put the NFVI files into the corresponding xci/nfvi/$NFVI/files directory;
otherwise the files directory would become confusing.
Change-Id: Iea98167ff0bc8d338a94fe1c064ac0ab396c53d3
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|
|
The rest of the OPNFV projects use the variable DEPLOY_SCENARIO, so
XCI should be aligned with them as well, even though OPNFV_SCENARIO
fits better than DEPLOY_SCENARIO.
Change-Id: Id48c41fa8a1fa9493cfc7a4906f64b6d8ed27d64
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
|
The OPENSTACK_OSA_PATH only makes sense on localhost. As such, when we
use it in playbooks that operate on remote hosts, the result is not
predictable. However, we rsync the entire releng-xci repository to the
opnfv host, so we can make everything predictable by simply cloning
everything in advance into the .cache directory. That directory is then
rsync'd to the opnfv host. As such, we can repurpose the
OPENSTACK_OSA_PATH to point to the path on the OPNFV host. Moreover,
all external repositories are being cloned to .cache/repos so we can
eliminate some variables in order to simplify the code. Finally, we
bring back the ability to use an external OSA repository for
development purposes.
Change-Id: Ieef3e22ae2085f6735185634d555cfc0d4b69b39
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
Previously, we used to clone the releng-xci repository under a directory
in /tmp, copy our changes to that repository and then run the
xci-deploy.sh script from it. However, this made things far too complex
for deployers and developers since some playbooks were used from the
local repo whereas others were used from the /tmp checkout. Running
everything from our local repository simplifies things a lot since we
can directly test our changes, and it also reduces the code we have in
our playbooks.
Change-Id: If16aa51b2846c170676df82d25cb90e26b1568b2
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
All scenarios are being cloned to XCI_SCENARIOS_CACHE so look
there for the various override files. This will allow external
scenarios to influence the XCI environment.
Change-Id: I39a48ce55baaa29d09737ce6232867ef1165f099
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
|
|
We are installing the docker package but not checking whether the service
is started. The service name is the same for all three distros, but I still
added a variable to each distro's variables file to follow best practices.
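A minimal sketch of the added check; the variable name is an assumption
and would live in each distro's variables file:

  - name: ensure the docker service is started and enabled
    service:
      name: "{{ docker_service_name }}"   # per-distro variable (assumed name)
      state: started
      enabled: true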
Change-Id: I0c73069ea7edc366e824cf39d14d24d1416fd6c3
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
The OPNFV_RELENG_DEV_PATH variable was used to point to a releng-xci
development repository. However, people normally set the current
directory as the development one and they almost always want to
test the current code in XCI. Using an secondary releng-xci tree
as development repo is a very obscure case and it normally complicates
things. As such, let drop this option and always use the current
repository for development purposes.
Change-Id: If111bf29a32a5f6ea28694f191645af0c6a87abc
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
Change-Id: I9253edf028fce571e04f9f82103a94952e05d2d4
Signed-off-by: Periyasamy Palanisamy <periyasamy.palanisamy@ericsson.com>
|
|
The RUN_TEMPEST global environment variable has been defined in *user-vars*
but never used in the playbooks. This change enables the
use of that value.
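A sketch of how the value could gate the related tasks; the task body and
variable wiring are assumptions, only the conditional pattern is the point:

  - name: run tempest against the deployment
    command: /opt/run-tempest.sh          # hypothetical entry point
    when: run_tempest | bool              # assumed lowercase Ansible var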
Change-Id: I49ca092546494c0cdcb015a549828bf79fa5f889
Signed-off-by: Victor Morales <victor.morales@intel.com>
|
|
OpenStack-Ansible supports deploying Ceph.
The purpose of this patch is to configure Ceph
just like we configure other OpenStack components.
The default is to not deploy Ceph.
If you want to deploy Ceph, you just need to
export XCI_CEPH_ENABLED=true before running xci-deploy.sh.
Once deployed successfully, OpenStack storage will use Ceph.
Change-Id: Ifd8d16fdce2914b6316842e72bbfd93228ea059d
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|
|
The docker-py package is required by the docker_container module in Ansible:
http://docs.ansible.com/ansible/latest/docker_container_module.html#docker-container
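A small sketch showing the dependency and its consumer; the image and task
names are illustrative:

  - name: install docker-py, required by the docker_container module
    pip:
      name: docker-py
      state: present

  - name: start a container via the docker_container module
    docker_container:
      name: healthcheck
      image: alpine:latest
      command: sleep 600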
Change-Id: Ib051ae09c84cfa973ef814852e78626499471d0f
Signed-off-by: Manuel Buil <mbuil@suse.com>
|
|
Docker is needed for running tests against the deployment.
Shade is needed for managing OpenStack via Ansible.
This change adds tasks to install docker and shade on the opnfv
host if the deployment is run as part of CI.
Users should be free to install these themselves if they want, so they
are not installed by default.
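A rough sketch of the conditional install; the CI flag name is an
assumption:

  - name: install docker for CI runs only
    package:
      name: docker
      state: present
    when: is_ci_run | default(false) | bool   # hypothetical flag

  - name: install shade for CI runs only
    pip:
      name: shade
      state: present
    when: is_ci_run | default(false) | bool   # hypothetical flag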
Change-Id: Idfd0f02312cc5e1b0180ed2408755a8c730b987b
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
|
|
Change-Id: I42c6f5f07ac87b5599758947fabe5fce36d44a2e
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
Instead of making OSA generate self-signed certs, bring our
own and pass them.
This way we will be able to trust those certs and start
consuming OpenStack easily.
It will also generate a proper openrc file to source, so we can start
consuming the cloud properly.
Change-Id: Ic72a8b05e6efb222926fc5fa0800e033b2dbd22f
Closes-Bug: RELENG-266
Signed-off-by: Yolanda Robla <yroblamo@redhat.com>
|
|
These packages are needed by the pw-token-gen.py tool.
Change-Id: Ib9d165274449551a469e201da9feeffac5a7a4cf
Signed-off-by: Juan Vidal Allende <juan.vidal.allende@ericsson.com>
|
|
This will allow defining XCI_EXTRA_VARS_PATH, which can
contain group_vars/all (or any other valid files); those files
will be copied into the releng and bifrost playbooks.
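As an illustration, a group_vars/all file placed under XCI_EXTRA_VARS_PATH
could carry site-specific overrides; the variable names below are made up:

  # $XCI_EXTRA_VARS_PATH/group_vars/all (illustrative)
  http_proxy: "http://proxy.example.com:3128"
  custom_ntp_server: 10.0.0.1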
Change-Id: I95e4b0bfb67f26bfa1eb10c97096784eb7f3a87a
Signed-Off-By: Yolanda Robla <yroblamo@redhat.com>
|
|
Putting the host keys in '/' requires root privileges, so
it's best if we place them in the same directory as the
rest of the XCI files.
Change-Id: I030ed3d6cbb57bb984a78aeffb4eca2bd5c10bb0
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
When developing XCI features it's useful to be able to use local
repositories rather than cloning them from git, since cloning makes
it harder to test local modifications against XCI. As such, we add
three new variables which can be used to hold local paths to the
bifrost, releng and openstack-ansible repositories. We are still
cloning the repositories but we then use the 'synchronize' Ansible
module to copy modified files from the local repositories.
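A minimal sketch of the copy step; the path variables are assumptions:

  - name: copy local modifications over the cloned openstack-ansible repo
    synchronize:
      src: "{{ openstack_osa_dev_path }}/"   # hypothetical local path variable
      dest: "{{ openstack_osa_path }}"       # hypothetical clone path variable
      recursive: true
      delete: false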
Change-Id: I6d593ea48d8b9c51415d9d0848f77a498ef2f486
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
|
XCI has different jobs/loops to run
- patchset verification jobs (currently bifrost, and osa in the future)
- periodic jobs (bifrost and osa)
- daily jobs (for OPNFV platform deployment and testing)
The same scripts/playbooks used by XCI will also be used by developers.
We need to do different things depending on the context in which the
scripts and playbooks are executed:
- periodic jobs will use the latest of everything to find working versions
of the components. (periodic osa will use unpinned role requirements
for example)
- daily jobs will use pinned versions in order to bring up the platform
and run OPNFV testing against it. (daily deployment will use pinned
versions and role requirements for example)
- developers might choose to use pinned versions or latest
Depending on which loop we are running, we need to do things differently
in scripts and playbooks. This variable will help us do this in an easy way.
We could of course do pattern matching on the job name, but that will not
work if the scripts are used outside of Jenkins.
The default loop for non-Jenkins execution is set to daily as we want
developers to use working versions unless they change it to something
else intentionally.
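A sketch of how a playbook might branch on the variable; the file names and
the lookup are assumptions, only the branching idea comes from the text
above:

  - name: use unpinned role requirements for the periodic loop
    copy:
      src: unpinned-role-requirements.yml    # hypothetical file
      dest: ansible-role-requirements.yml    # hypothetical file
    when: lookup('env', 'XCI_LOOP') == 'periodic'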
Change-Id: Iff69c77ae3d9db2c14de1783ce098da9e9f0c83d
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
|
|
This change moves the preparation (cloning repos, combining opnfv/bifrost
with openstack/bifrost) and the destruction and creation of VM nodes from
the script into a separate playbook.
This requires the host to have ansible installed. The version of ansible
to install using pip currently matches what bifrost uses but it is
hardcoded and needs to be fixed properly.
The reason for having this as a playbook is to simplify the script and
increase reuse. This playbook will be used for
- developer sandbox
- periodic bifrost jobs to run against the latest on a given branch and
promoting bifrost sha1 to pin later on
- daily jobs to run using pinned versions of bifrost
Change-Id: I033f12290dfea19d4c74be80eea7203211c0369e
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
|
|
The network configuration task and its accompanying handler are put into a
role and the handler is converted to a task.
A distro-specific var, interface, is introduced to ensure we do not
hardcode an interface which might not be available.
The templates are updated accordingly.
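For instance, each distro variables file can pin its own interface name;
the file names and values below are illustrative only:

  # vars/suse.yml (illustrative)
  interface: eth1

  # vars/debian.yml (illustrative)
  interface: ens3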
Change-Id: I667620fe22c93a9b20a1d8c1b7b0051d7647b591
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
|
|
The OS-family vars_files are currently empty and are put there to show
the intent (kind of a TODO).
opnfv.yml holds non-distro- and non-flavor-specific variables.
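A sketch of how these files are meant to be pulled in; the paths and the
exact include pattern are assumptions:

  vars_files:
    - ../var/opnfv.yml
    - "../var/{{ ansible_os_family }}.yml"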
Change-Id: I65aff2650257f2df00fd1f0a0638fd1aff596ac4
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
|
|
This patch adds the main/common playbooks, files, and templates to
be used for all flavors.
The provisioning and OpenStack installation process will be as below:
- provision the VMs for the flavor using bifrost
- once the VMs are provisioned, the configure-localhost.yml playbook will
be run, preparing the localhost in order to ensure the right playbooks
(configure-opnfvhost.yml and configure-targethosts.yml), inventory files
and var files are in place before we proceed with configuring the opnfv host.
- after getting the right files for the flavor, the opnfv host will be
configured using the configure-opnfvhost.yml playbook.
- finally, the target hosts will be configured.
- once the above process is completed, the openstack-ansible playbooks will
be run, setting up the hosts, the infrastructure and OpenStack.
Change-Id: I6e08b2cfdab9627f765e6fc414917b09f953cab2
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
|
|
A summary of the changes:
- the flavors directory has been removed and the flavor config files are
moved into config and renamed to <flavor>-vars
- common files are put under file
- files specific to flavors are put under file/<flavor> directories
- templates and var files are stored in template and var directories
respectively
- 3 playbooks are created
Change-Id: I8a93e0947ccb02f93a6c8f00da27e0cc6b4dc21e
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
|