1. Fix the error where kube_require_packages is undefined.
2. Add "remote_user: root" to configure-kubenet.yml.
Without it, Ansible fails to connect to the host:
fatal: [opnfv]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
"unreachable": true}
installer-type:kubespray
deploy-scenario:k8-nosdn-nofeature
Change-Id: Ia8d1980ad18375c0cff3a97b284b0f53d7539e23
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|
Now that the scenario role is recorded as a local fact, we can
include the role directly, so we no longer need the
intermediate file.
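A hedged sketch of the direct include; the fact path used here is hypothetical, only the pattern matters:

    # Include the scenario role straight from the recorded local fact.
    # 'ansible_local.xci.scenarios.role' is a hypothetical fact path.
    - name: Include the deploy-scenario role
      include_role:
        name: "{{ ansible_local.xci.scenarios.role }}"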
deploy-scenario:os-nosdn-nofeature
installer-type:osa
Change-Id: Ia3c5658826f115538b2a103d987ee8f33d3048b9
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
The SSH keys for the OPNFV host are already configured in the
configure-opnfvhost.yml playbook, so we shouldn't do that in a playbook
that is only meant to configure the target hosts. As such, fix the group
to use 'k8s-cluster' instead.
Since the targethosts playbook no longer applies to all hosts, we
can simply drop the list of required packages and only install 'netaddr'
on the OPNFV host, which is the only host that needs it. Similarly, the
dbus package is only needed on the target hosts.
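Roughly, the resulting split looks like this (a sketch assuming the 'package' and 'pip' modules; the previous group value isn't shown in this log):

    # configure-targethosts.yml -- now scoped to the cluster hosts only
    - hosts: k8s-cluster
      tasks:
        - name: Install dbus on the target hosts
          package:
            name: dbus
            state: present

    # configure-opnfvhost.yml -- netaddr is only needed here
    - hosts: opnfv
      tasks:
        - name: Install netaddr on the OPNFV host
          pip:
            name: netaddr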
Change-Id: I293ad83a3a95797d9025f2cddd7849be7b3a49da
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
Move the default k8s-cluster.yml from kubespray/files/ to
role/k8-nosdn-nofeature/files/k8s-cluster.yml since it's scenario
specific. Moreover, we set 'cloud' as kube_network_plugin, which makes
kubespray use kubenet as the network plugin. The kubenet network plugin
requires routing between nodes to be set up by the administrator, so we
need to add static routes on every host since the hosts are connected
using a bridge instead of a router.
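The idea in sketch form; the route data structure is made up for illustration:

    # role/k8-nosdn-nofeature/files/k8s-cluster.yml selects the plugin:
    #   kube_network_plugin: cloud    (kubespray then uses kubenet)
    #
    # Static-route task sketch; 'kubenet_routes' and its fields are illustrative.
    - name: Add static routes towards the pod networks of the other nodes
      command: "ip route add {{ item.pod_cidr }} via {{ item.node_ip }}"
      loop: "{{ kubenet_routes }}"
      become: true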
installer-type:kubespray
deploy-scenario:k8-nosdn-nofeature
Change-Id: I6ab7288c966d7f17e9d61279056f7673be37bebe
Signed-off-by: wutianwei <wutianwei1@huawei.com>
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
Add the k8-nosdn-nofeature and k8-canal-nofeature roles under the scenarios directory.
Run a different role to configure the k8s cluster according to the deploy scenario.
installer-type:kubespray
deploy-scenario:k8-canal-nofeature
Change-Id: Ia96b01f79fb058e045c5b7d9d9aecb7f15a21e63
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|
This change updates the prepare-functest role for testing k8s scenarios
using the functest healthcheck. The changes include:
- update tasks to skip checking/creating the public gateway, which
is only needed for OpenStack-based scenarios
- update the run-functest.sh.j2 template to set the docker image
name based on the FUNCTEST_SUITE_NAME that is going to be used
(see the sketch after this list)
- update the run-functest.sh.j2 template to add the commands needed to run
tests using the functest-kubernetes-${FUNCTEST_SUITE_NAME} docker image
- update env.j2 to stop setting the EXTERNAL_NETWORK variable, which is
only needed for OpenStack-based scenarios
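The image selection can be sketched as an Ansible-style variable; the 'opnfv/' registry prefix is an assumption:

    # Deriving the docker image from the suite name (vars-style sketch).
    functest_suite_name: healthcheck
    functest_image: "opnfv/functest-kubernetes-{{ functest_suite_name }}"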
Apart from updating the prepare-functest role, a bug has also been fixed
by fetching xci.env for the kubespray installer as well.
installer-type:kubespray
deploy-scenario:k8-nosdn-nofeature
Change-Id: Ia701db9748ea9509a2dc165341285fb189aa7266
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
In the OpenStack-Ansible installers we are using the XCI SSL
certificates for the endpoints, but in kubespray we are generating them
on the fly. In order to keep both setups as close as possible, we can
use the XCI certificates in kubespray as well.
Change-Id: I1ca55127fe747618205394c02b3d44bb573435f4
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
Drop the kubespray specific tasks for managing the SSH keys in favor of
the common ones.
Change-Id: Ib8e18fcc14c4c0126cae72740dbb33921a21af6b
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
(this commit fixes many things because they all need to be submitted
together to unblock the jobs)
Commit 9e1d3d6e62abf5d0da26a296bcd235f37a54d9c6 ("xci: playbooks: Fixes
various ansible-lint warnings") broke public key authentication from
localhost to the OPNFV host because the localhost pubkey was not
appended to the authorized_keys file. The reason was that the
task was skipped due to the 'creates' parameter. This is now fixed by
dropping the check, since we always need to append the localhost pubkey.
This is only a temporary solution until we modify kubespray to use the
common file for managing the SSH keys.
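A sketch of the unconditional append; the key paths are assumptions:

    # Append the localhost pubkey on every run; there is no 'creates'
    # argument, so the task can no longer be skipped. Paths are illustrative.
    - name: Append the localhost public key to the OPNFV host authorized_keys
      shell: cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys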
This also makes the final 'kubectl' move to /usr/local/bin non-fatal
since future kubespray releases put it there already.
The same commit also broke the k8s-cluster.yml overrides. This is
because the file was never copied across due to the task conditional
being wrong. As such, we fix the conditional to check for the correct
file.
Change-Id: I9cfb29eba50c7fea9df29581ebb015163b8a9754
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
In preparation for adding support for the 'ansible-lint' tool we fix
various problems in our playbooks to make the tool happy before we make
it mandatory.
Some of the problems that are fixed here are (illustrated in the sketch after this list):
- [ANSIBLE0011] All tasks should be named
- [ANSIBLE0012] Commands should not change things if nothing needs doing
- [ANSIBLE0013] Use shell only when shell functionality is required
- [ANSIBLE0010] Package installs should not use latest
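Hedged examples of the patterns behind those warnings; the module arguments are illustrative:

    # ANSIBLE0011/0012/0013: name the task, prefer 'command' over 'shell',
    # and use 'creates' so nothing changes when the key already exists.
    - name: Generate an SSH key pair for the deployment
      command: ssh-keygen -f /root/.ssh/id_rsa -N ""
      args:
        creates: /root/.ssh/id_rsa

    # ANSIBLE0010: install a fixed state instead of 'latest'.
    - name: Install python-netaddr
      package:
        name: python-netaddr
        state: present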
installer-type:osa
deploy-scenario:os-nosdn-nofeature
Change-Id: I66c759d3932a414b81b2846393d2d98ce80c0b6d
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
This change removes the variables that are not used in any of the
playbooks/roles from the OPNFV Ansible vars.
Apart from that, all-caps Ansible vars are replaced with lowercase ones,
and the impacted playbooks/roles are updated.
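The renaming pattern, with a made-up variable for illustration:

    # before (all caps, hypothetical example):
    # OPNFV_HOST_IP: "192.168.122.2"
    # after (lowercase):
    opnfv_host_ip: "192.168.122.2"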
installer-type:osa
deploy-scenario:os-nosdn-nofeature
Change-Id: I99ebdc155b3903176ac5940b64cef0c0f3aa0f0d
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
1. Add type: NodePort to the dashboard service. The default is ClusterIP,
which cannot be accessed from outside the cluster.
2. Print the URL, user and password for the user to access the dashboard.
3. Configure the kubectl CLI on the OPNFV host.
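A minimal sketch of a NodePort service; the names and ports are illustrative rather than the actual dashboard manifest:

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: NodePort            # the ClusterIP default is cluster-internal only
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30443       # illustrative; must fall in the NodePort range
      selector:
        k8s-app: kubernetes-dashboard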
Change-Id: I6cb6e6f7547412139ece0c40a85de67a9edce0ef
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|
Change-Id: Ie196d1df537d09f0f91e43ab5e0305a45d543815
Signed-off-by: Fatih Degirmenci <fdegir@gmail.com>
|
Kubespray already supports the CentOS distribution so make the
necessary changes to allow it to work in XCI.
Change-Id: I3cf1db055a5fd563b107b46456bc3e18eeafb3ab
Co-authored-by: Markos Chandras <mchandras@suse.de>
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|
Hardcoding the interface as a variable is very fragile since it varies
from host to host. Instead, we can use the Ansible facts to find out the
interface name and then use that to configure all the VLANs and
networking.
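ansible_default_ipv4.interface is a standard Ansible fact; the VLAN naming below is illustrative:

    # Pick up the interface from facts instead of hardcoding it.
    - name: Record the default network interface of the host
      set_fact:
        host_interface: "{{ ansible_default_ipv4.interface }}"

    - name: Show the VLAN device that would be built on top of it
      debug:
        msg: "{{ host_interface }}.10"   # illustrative VLAN id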
Change-Id: Ie7e2409d638625b9bede23b6c1fe33dc36f81840
Signed-off-by: Markos Chandras <mchandras@suse.de>
|
This commit introduces kubespray into XCI.
The k8s installation currently assumes that a k8s
and an OpenStack installation cannot coexist.
If XCI_INSTALLER is set to "kubespray" and
DEPLOY_SCENARIO is set to "k8-nosdn-nofeature",
xci-deploy.sh will install Kubernetes instead of OpenStack.
The Kubernetes version is currently the beta release v1.9.0,
following the master branch of kubespray,
which only supports Ubuntu for now;
openSUSE and CentOS still need development and testing.
This patch creates the directory xci/installer/kubespray,
where the kubespray-related files are placed.
The xci/installer/$installer/playbooks/configure-localhost.yml was moved
to xci/playbooks/configure-localhost.yml as a common YAML file.
You can modify some parameters according to your needs
in xci/installer/kubespray/files/k8s-cluster.yml to deploy the cluster.
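For example, these keys exist in kubespray's k8s-cluster.yml, though the values shown here are only examples:

    # xci/installer/kubespray/files/k8s-cluster.yml (excerpt-style sketch)
    kube_version: v1.9.0           # the beta release tracked at the time
    kube_network_plugin: calico    # example value; scenarios may override it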
When deploying Kubernetes,
kubespray is downloaded to releng-xci/.cache/repos/kubespray.
If your flavor is HA, the haproxy_server and keepalived roles are downloaded
to xci/playbooks/roles; they set up the haproxy service for Kubernetes.
Change-Id: I24d521a735d7ee85fbe5af8c4def65f37586b843
Signed-off-by: wutianwei <wutianwei1@huawei.com>
|