Diffstat (limited to 'docs')

 docs/conf.py                         |   1
 docs/conf.yaml                       |   3
 docs/images/arch-layout-k8s-ha.png   | bin 0 -> 56989 bytes
 docs/images/arch-layout-k8s-noha.png | bin 0 -> 268126 bytes
 docs/requirements.txt                |   3
 docs/specs/infra_manager.rst         | 130
 docs/specs/k8-calico-onap.rst        | 141
 docs/specs/k8-odl-coe.rst            | 105
 docs/xci-criterias-cls.rst           |  74
 docs/xci-overview.rst                |   2
 docs/xci-user-guide.rst              | 147
 11 files changed, 560 insertions, 46 deletions
diff --git a/docs/conf.py b/docs/conf.py
new file mode 100644
index 00000000..86ab8c57
--- /dev/null
+++ b/docs/conf.py
@@ -0,0 +1 @@
+from docs_conf.conf import * # flake8: noqa
diff --git a/docs/conf.yaml b/docs/conf.yaml
new file mode 100644
index 00000000..305b679e
--- /dev/null
+++ b/docs/conf.yaml
@@ -0,0 +1,3 @@
+---
+project_cfg: opnfv
+project: releng-xci
diff --git a/docs/images/arch-layout-k8s-ha.png b/docs/images/arch-layout-k8s-ha.png
new file mode 100644
index 00000000..e0870305
--- /dev/null
+++ b/docs/images/arch-layout-k8s-ha.png
Binary files differ
diff --git a/docs/images/arch-layout-k8s-noha.png b/docs/images/arch-layout-k8s-noha.png
new file mode 100644
index 00000000..0ee8bceb
--- /dev/null
+++ b/docs/images/arch-layout-k8s-noha.png
Binary files differ
diff --git a/docs/requirements.txt b/docs/requirements.txt
new file mode 100644
index 00000000..f26b0414
--- /dev/null
+++ b/docs/requirements.txt
@@ -0,0 +1,3 @@
+lfdocs-conf
+sphinxcontrib-httpdomain
+sphinx-opnfv-theme
diff --git a/docs/specs/infra_manager.rst b/docs/specs/infra_manager.rst
new file mode 100644
index 00000000..a8ecb548
--- /dev/null
+++ b/docs/specs/infra_manager.rst
@@ -0,0 +1,130 @@
+PDF and IDF support in XCI
+###########################
+:date: 2018-04-30
+
+This spec introduces the work required to adapt XCI to use PDF and IDF, which
+will be used for both virtual and baremetal deployments.
+
+Definition of Terms
+===================
+* Baremetal deployment: Deployment on physical servers, as opposed to deploying
+software on virtual machines or containers running on the same physical server.
+
+* Virtual deployment: Deployment on virtual machines, i.e. the servers where
+nodes will be deployed are virtualized. For example, in OpenStack, computes and
+controllers will be virtual machines. This deployment is normally done on just
+one physical server.
+
+* PDF: It stands for POD Descriptor File, which is a document that lists the
+hardware characteristics of a set of physical or virtual machines which form
+the infrastructure. Example:
+
+https://git.opnfv.org/pharos/tree/config/pdf/pod1.yaml
+
+* IDF: It stands for Installer Descriptor File, which is a document that
+includes useful information for the installers to accomplish the baremetal
+deployment. Example:
+
+https://git.opnfv.org/fuel/tree/mcp/config/labs/local/idf-pod1.yaml
+
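+For illustration only, a minimal PDF-style node entry might look like the
+following (names and values are hypothetical; the Pharos example linked above
+is the authoritative reference)::
+
+  nodes:
+    - name: node1
+      node:
+        type: baremetal
+        arch: x86_64
+        cores: 8
+        memory: 32G
+      remote_management:
+        type: ipmi
+        address: 192.168.10.11
+        user: admin
+        pass: password
+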
+Problem description
+===================
+
+Currently, XCI only supports virtualized deployments running on a single
+server. This is good when the user has limited resources; however, baremetal is
+the preferred way to deploy NFV platforms in lab or production environments.
+This limitation also greatly reduces the scope of the testing, because NFV
+hardware-specific features such as SR-IOV cannot be tested.
+
+Proposed change
+===============
+
+Introduce the infra_manager tool, which will prepare the infrastructure for XCI
+to drive the deployment on a set of virtual or baremetal nodes. This tool will
+execute two tasks:
+
+1 - Creation of virtual nodes or initialization of the preparations for
+baremetal nodes
+2 - OS provisioning on the nodes, whether virtual or baremetal
+
+Once those steps are ready, XCI will continue with the deployment of the
+scenario on the provisioned nodes.
+
+The infra_manager tool will consume the PDF and IDF files describing the
+infrastructure as input. It will then use a <yet-to-be-created-tool> to do
+step 1 and bifrost to boot the operating system on the nodes.
+
+Among other services, Bifrost uses:
+
+- Disk image builder (dib) to generate the OS images
+- dnsmasq as the DHCP server, which will provide the PXE boot mechanism
+- ipmitool to manage the servers
+
+Bifrost will be deployed inside a VM on the jumphost.
+
+For the time being, we will create the infrastructure based on the defined XCI
+flavors; however, the implementation should not hinder the possibility of
+having one PDF and IDF per scenario, defining the characteristics and the
+number of nodes to be deployed.
+
+Code impact
+-----------
+
+The new code will be introduced in a new directory called infra_manager under
+releng-xci/xci/prototypes.
+
+Tentative User guide
+--------------------
+
+Assuming the user cloned releng-xci on the jumphost, the following should be
+done:
+
+1 - Move the IDF and PDF files which describe the infrastructure to
+releng-xci/xci/prototypes/infra_manager/var. There is an example under xci/var.
+
+2 - Export the XCI_FLAVOR variable (e.g. export XCI_FLAVOR=noha)
+
+3 - Run the <yet-to-be-created-tool> to create the virtual nodes based on the
+provided PDF information (CPU, RAM, disk...) or initialize the preparations for
+baremetal nodes
+
+4 - Start the bifrost process to boot the nodes
+
+5 - Run the VIM deployer script:
+releng-xci/xci/installer/$inst/deploy.sh
+
+where $inst = {osa, kubespray, kolla}
+
+In case of problems, the best way to debug is to access the bifrost VM and use:
+
+* bifrost-utils
+* ipmitool
+* check the DHCP messages in /var/log/syslog
+
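+For example (illustrative commands only; host addresses and credentials are
+hypothetical)::
+
+  # check what dnsmasq/PXE did during node provisioning
+  grep dnsmasq /var/log/syslog | tail -n 50
+
+  # check and control the power state of a baremetal node via its BMC
+  ipmitool -I lanplus -H 192.168.10.11 -U admin -P password chassis power status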
+
+Implementation
+==============
+
+Assignee(s)
+-----------
+
+Primary assignee:
+ Manuel Buil (mbuil)
+ Jack Morgan (jmorgan1)
+ Somebody_else_please (niceperson)
+
+Work items
+----------
+
+1. Provide support for a dynamically generated inventory based on PDF and IDF.
+This mechanism could be used for both baremetal and virtual deployments.
+
+2. Contribute the servers-prepare.sh script
+
+3. Contribute the nodes-deploy.sh script
+
+4. Integrate the three previous components correctly
+
+5. Provide support for the XCI supported operating systems (openSUSE, Ubuntu,
+CentOS)
+
+6. Allow PDF and IDF per scenario
diff --git a/docs/specs/k8-calico-onap.rst b/docs/specs/k8-calico-onap.rst
new file mode 100644
index 00000000..445e5c71
--- /dev/null
+++ b/docs/specs/k8-calico-onap.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. Copyright 2018 Intel Corporation
+
+.. Links
+.. _Open Networking Automation Platform: https://www.onap.org/
+.. _ONAP metric analysis: https://onap.biterg.io/
+.. _ONAP on Kubernetes: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html
+.. _Helm: https://docs.helm.sh/
+.. _ONAP on OpenStack: https://wiki.onap.org/display/DW/ONAP+Installation+in+Vanilla+OpenStack
+.. _OOM Minimum Hardware Configuration: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html#minimum-hardware-configuration
+.. _OOM Software Requirements: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html#software-requirements
+.. _seed code: https://gitlab.com/Orange-OpenSource/onap_oom_automatic_installation
+.. _Orange ONAP OOM Deployment Resource Requirements: https://gitlab.com/Orange-OpenSource/kubespray_automatic_installation/blob/521fa87b20fdf4643f30fc28e5d70bdf9f1c98f3/vars/pdf.yaml
+
+This spec introduces the work required to include the XCI scenario
+for `Open Networking Automation Platform`_ (ONAP) through the ONAP
+Operations Manager (OOM) tool. This tool provides the ability to manage
+the entire life-cycle of an ONAP installation on top of a Kubernetes
+deployment.
+
+Problem description
+===================
+According to the `ONAP metric analysis`_, more than 26K commit
+changes have been submitted since its announcement. Every patchset
+that is merged triggers a Jenkins job for the creation and deployment
+of a Docker container image for the corresponding service. Those new
+images are consumed by deployment methods like `ONAP on Kubernetes`_
+and `ONAP on OpenStack`_ during the installation of ONAP services.
+
+Given that ONAP is constantly changing, detecting issues early can
+be crucial for ensuring the proper operation of the OOM tool.
+
+Minimum Hardware Requirements
+=============================
+
+Initially, the No HA flavor will be the only supported flavor in order
+to bring up a reference implementation of the scenario. Support for
+other flavors will be introduced based on this implementation.
+
+According to the `OOM Minimum Hardware Configuration`_, ONAP requires
+a large amount of resources, especially on the Kubernetes worker nodes.
+
+Given that the No HA flavor has multiple worker nodes, the containers
+can be distributed across the nodes, resulting in a smaller resource
+footprint per node.
+
+The No HA scenario consists of 1 Kubernetes master node and 2 Kubernetes
+Worker nodes. Total resource requirements should be calculated based on
+the number of nodes.
+
+This recommendation is a work in progress and is based on the Orange
+implementation, which can be seen in
+`Orange ONAP OOM Deployment Resource Requirements`_.
+The resource requirements are subject to change and the scenario will
+be updated as necessary.
+
+Hardware for Kubernetes Master Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 8GB
+* HD: 150GB
+* vCores: 8
+
+Hardware for Kubernetes Worker Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 64GB
+* HD: 80GB
+* vCores: 16
+
+Proposed change
+===============
+
+In order to guarantee the proper installation and validation of ONAP
+services, this spec proposes two phases that complement each other:
+
+1. Creation of the k8-calico-onap scenario for the installation of ONAP
+services. This new scenario will be designed to validate the
+installation process provided by the OOM tool.
+2. Adding integration tests to ensure that ONAP is operating
+properly. This process should cover the Design and Runtime phases.
+
+Code impact
+-----------
+New code will be created based on the existing k8-calico-nofeature
+scenario and will be placed in the scenarios/k8-calico-onap directory
+of the releng-xci-scenarios repository. The ONAP installation should
+proceed once the VIM has been installed and before the OPNFV tests run.
+
+
+The default configuration for the virtual resources (4 vCores, 8GB RAM,
+and 100GB HD) offered by XCI does not satisfy ONAP's needs. The
+scenario override mechanism will be used to bring up nodes with
+the necessary amount of resources (see the illustrative sketch below).
+This will be replaced by PDF and IDF once they become available. PDF
+and IDF implementation is a separate work item and is not expected to
+be a dependency for the implementation of this scenario.
+
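+The exact override format is out of scope here; purely as an illustration, the
+intent is to raise the per-node resources along these lines (variable names
+are hypothetical)::
+
+  # per-node sizing required by ONAP, as opposed to the XCI defaults
+  k8s_master_vcpus: 8
+  k8s_master_ram: 8192     # MB
+  k8s_worker_vcpus: 16
+  k8s_worker_ram: 65536    # MB
+  k8s_worker_disk: 80      # GB
+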
+Software Requirements
+---------------------
+
+OOM has gone through significant changes during the Beijing release
+cycle, which resulted in a changed way of installing ONAP.
+
+In its current release, new software is necessary to install ONAP,
+as listed below and on `OOM Software Requirements`_.
+
+* Helm: 2.8.x
+* kubectl: 1.8.10
+
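+As a quick sanity check (assuming Helm v2 and kubectl are already installed on
+the client), the installed versions can be confirmed with::
+
+  helm version --client
+  kubectl version --client
+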
+OOM also provides a Makefile that collects the instructions for packaging
+the ONAP services into the local Helm (Tiller) repository. Which ONAP
+services are enabled is determined through the OOM configuration. The new
+role will be placed in the scenarios/k8-calico-onap/role/k8-calico-onap/tasks
+folder in the releng-xci-scenarios repository.
+
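+For reference, a rough sketch of that packaging and deployment flow, based on
+the upstream OOM quickstart (exact commands differ between OOM releases)::
+
+  # build the ONAP Helm charts and push them to the local chart repository
+  cd oom/kubernetes
+  make all
+
+  # deploy ONAP from the generated charts; which services come up is
+  # controlled by the OOM configuration/overrides
+  helm install local/onap --name dev --namespace onap
+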
+Tentative User guide
+--------------------
+TBD
+
+Implementation
+==============
+The Orange team has been working on this scenario for a while; the
+new role can use and adapt their `seed code`_ during the implementation.
+
+Assignee(s)
+-----------
+
+Primary assignee:
+ Victor Morales (electrocucaracha)
+ Fatih Degirmenci (fdegir)
+ Jack Morgan (jmorgan1)
+
+Work items
+----------
+TBD
+
+Glossary
+--------
diff --git a/docs/specs/k8-odl-coe.rst b/docs/specs/k8-odl-coe.rst
new file mode 100644
index 00000000..cd29456c
--- /dev/null
+++ b/docs/specs/k8-odl-coe.rst
@@ -0,0 +1,105 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. Copyright 2018 Ericsson AB and Others
+
+.. Links
+.. _OpenDaylight COE: https://wiki.opendaylight.org/view/COE:Main
+.. _setting-up-coe-dev-environment: https://github.com/opendaylight/coe/blob/master/docs/setting-up-coe-dev-environment.rst
+.. _ansible-opendaylight: https://git.opendaylight.org/gerrit/gitweb?p=integration/packaging/ansible-opendaylight.git;a=tree
+
+This spec proposes adding a k8-odl-coe XCI scenario with OpenDaylight as the
+networking provider for Kubernetes, using the OpenDaylight COE (Container
+Orchestration Engine) and NetVirt projects.
+
+Problem Description
+===================
+
+Currently, OpenDaylight's advanced networking capabilities are not leveraged
+with Kubernetes in any scenario. This spec proposes a reference platform for
+deployments that want to use OpenDaylight as a networking backend for
+Kubernetes.
+
+Minimum Hardware Requirements
+=============================
+
+Hardware for Kubernetes Master Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 16 GB (20 GB for ha flavor i.e. for OpenDaylight Clustering)
+* HD: 80 GB
+* vCores: 6
+
+Hardware for Kubernetes Worker Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 12 GB
+* HD: 80 GB
+* vCores: 6
+
+Supported XCI Sandbox Flavors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This scenario will support deployments on Mini, No HA and HA XCI Sandbox Flavors.
+
+Proposed Change
+===============
+
+1. Provide Pod Descriptor Files (PDF) and Installer Descriptor Files (IDF)
+   specific to this scenario to install Kubernetes with OpenDaylight COE.
+2. Introduce a new scenario k8-odl-coe in the releng-xci-scenarios repository.
+3. Reuse the role from the k8-nosdn-nofeature scenario to install Kubernetes.
+   It sets the kube_network_plugin option to 'cloud' in k8s-cluster.yml so
+   that Kubespray doesn't configure networking between pods (a minimal sketch
+   of this setting is shown after this list). This enables OpenDaylight to be
+   chosen as the networking backend in the following steps.
+4. Enhance upstream `ansible-opendaylight`_ role to deploy OpenDaylight with
+ COE Watcher on k8s master node(s) and CNI plugin on the k8s master and
+ worker node(s).
+5. Add the required Ansible tasks in k8-odl-coe role to direct XCI and
+ ansible-opendaylight role to configure k8s with OpenDaylight as the
+ networking backend for pod connectivity.
+6. Run the Health Check by testing the pods' connectivity.
+
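+A minimal sketch of the Kubespray group variable mentioned in step 3 (the
+surrounding file content is omitted)::
+
+  # k8s-cluster.yml (excerpt): let OpenDaylight, not Kubespray, own pod networking
+  kube_network_plugin: cloud
+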
+The COE Watcher binary and COE CNI plugin are built from the OpenDaylight COE
+source code. The user will have the flexibility to choose the SHA to build
+from via XCI's ansible-role-requirements.yml file.
+
+Code Impact
+-----------
+
+Code specific to the k8-odl-coe scenario will be added to the xci/scenarios
+directory of the releng-xci-scenarios repository.
+
+User Guide
+----------
+
+No user guide will be provided.
+
+Implementation
+==============
+
+See the Proposed Change section.
+
+Assignee(s)
+-----------
+
+Primary assignees:
+
+* Prem Sankar G (premsa)
+* Periyasamy Palanisamy (epalper)
+* Fatih Degirmenci (fdegir)
+
+Work Items
+----------
+
+1. Enhance the akka.conf.j2 in the upstream ansible-opendaylight role to work
+   with k8s deployments (i.e. run the ODL cluster on k8s master nodes).
+   Currently this works only for deployments based on OpenStack-Ansible.
+2. Enhance the upstream ansible-opendaylight role to install the
+   odl-netvirt-coe and odl-restconf Karaf features and to build the COE
+   watcher and CNI plugin binaries from source.
+3. Implement configure-kubenet.yml to choose OpenDaylight COE as the
+ networking backend.
+4. Implement Health Check tests (see the illustrative connectivity check below).
+
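+An illustrative pod-to-pod connectivity check of the kind the Health Check
+could automate (pod names and image are hypothetical)::
+
+  kubectl run test-a --image=busybox --restart=Never -- sleep 3600
+  kubectl run test-b --image=busybox --restart=Never -- sleep 3600
+  kubectl get pods -o wide             # note the IP assigned to test-b
+  kubectl exec test-a -- ping -c 3 <test-b-pod-ip>
+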
+Glossary
+--------
diff --git a/docs/xci-criterias-cls.rst b/docs/xci-criterias-cls.rst
new file mode 100644
index 00000000..0a0f8f97
--- /dev/null
+++ b/docs/xci-criterias-cls.rst
@@ -0,0 +1,74 @@
+.. _xci-criterias-cls:
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Fatih Degirmenci (fatih.degirmenci@ericsson.com)
+
+=============================================
+XCI Promotion Criteria and Confidence Levels
+=============================================
+
+This document explains the current promotion criteria and confidence levels
+XCI uses to test and promote the scenarios. It is followed by other chapters
+intended to start the conversation around how these criteria can be improved,
+depending on the features and scenarios that are onboarded to XCI or have
+declared interest in participating.
+
+The expectation is to update this document collaboratively with the feature
+projects, scenario owners, the XCI team, test projects and release management
+to find the right/sufficient/necessary level of testing that is relevant to
+the features and scenarios.
+
+This document should be seen as guidance for the projects taking part in XCI
+until the OPNFV CD-Based Release Model and the criteria set for the CI loops
+of that track become available. Until then, CI loops will be constructed and
+updated by taking input from this document to provide feedback to the projects
+based on the test scope set by the projects themselves.
+
+The CD-Based Release Model will supersede the information and criteria set in
+this document.
+
+Existing CI Loops and Promotion Criteria
+=========================================
+
+XCI has defined various CI loops that run for the scenarios taking part in XCI.
+These loops are:
+
+* verify
+* post-merge
+
+Currently, XCI uses the verify and post-merge loops to verify changes and
+promote the scenarios to the next loop in the CI flow as candidates. The
+details of what is done by each loop are listed below.
+
+verify
+------
+
+The changes and subsequent patches enter this pipeline and get verified against
+the most basic criteria OPNFV has.
+
+* virtual noha deployment
+* functest healthcheck
+
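+Purely as an illustration (this is not an official job definition), a developer
+can approximate these checks locally with the XCI sandbox::
+
+  export XCI_FLAVOR=noha                      # virtual noha deployment
+  export DEPLOY_SCENARIO=<scenario-under-test>
+  ./xci-deploy.sh
+  # followed by running the functest healthcheck suite against the deployment
+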
+The checks done within this loop are common to all scenarios and features,
+whether they are OpenStack or Kubernetes scenarios.
+
+The changes that get Verified+1 from this pipeline are deemed good and can be
+merged to master if there are sufficient +2 votes from the XCI and/or project
+committers.
+
+post-merge
+----------
+
+The changes that are merged to master enter this pipeline and get verified
+against the same criteria as the verify pipeline.
+
+* virtual noha deployment
+* functest healthcheck
+
+The checks done within this loop are common to all scenarios, whether they are
+OpenStack or Kubernetes scenarios.
+
+The changes that are successfully verified get promoted to the next loop in
+the pipeline.
+
+Evolving CI Loops and Promotion Criteria
+=========================================
+
+TBD
diff --git a/docs/xci-overview.rst b/docs/xci-overview.rst
index 575eb37c..9b225ec1 100644
--- a/docs/xci-overview.rst
+++ b/docs/xci-overview.rst
@@ -138,7 +138,7 @@ Multi-distro Support
--------------------
Giving choice and not imposing things on developers and users are two
-of the important aspects of XCI. This means that if they want to have all in one
+of the important aspects of XCI. This means that if they want to have smaller
deployments, they should be able to do that by using
:ref:`different flavors <sandbox-flavors>` provided by XCI.
diff --git a/docs/xci-user-guide.rst b/docs/xci-user-guide.rst
index 7a411257..5e76ca16 100644
--- a/docs/xci-user-guide.rst
+++ b/docs/xci-user-guide.rst
@@ -41,6 +41,7 @@ The sandbox provides
* multiple OPNFV scenarios to install
* ability to select different versions of upstream components to base the work on
* ability to enable additional OpenStack services or disable others
+* ability to install Kubernetes with different network plugins
One last point to highlight here is that the XCI itself uses the sandbox for
development and test purposes so it is continuously tested to ensure it works
@@ -50,10 +51,11 @@ purposes.
Components of the Sandbox
===================================
-The sandbox uses OpenStack projects for VM node creation, provisioning
-and OpenStack installation. XCI Team provides playbooks, roles, and scripts
-to ensure the components utilized by the sandbox work in a way that serves
-the users in the best possible way.
+The sandbox uses OpenStack tools for VM node creation and provisioning.
+OpenStack and Kubernetes installations are done using the tools from the
+corresponding upstream projects, with no changes to them. The XCI team
+provides playbooks, roles, and scripts to ensure the components utilized
+by the sandbox work in a way that serves the users in the best possible way.
* **openstack/bifrost:** Bifrost (pronounced bye-frost) is a set of Ansible
playbooks that automates the task of deploying a base image onto a set
@@ -70,6 +72,13 @@ the users in the best possible way.
More information about this project can be seen on
`OpenStack Ansible documentation <https://docs.openstack.org/developer/openstack-ansible/>`_.
+* **kubernetes-incubator/kubespray:** Kubespray is a composition of Ansible playbooks,
+ inventory, provisioning tools, and domain knowledge for generic Kubernetes
+ cluster configuration management tasks. The aim of Kubespray is to deploy a
+ production-ready Kubernetes cluster.
+ More information about this project can be seen on
+ `Kubespray documentation <https://kubernetes.io/docs/getting-started-guides/kubespray/>`_.
+
* **opnfv/releng-xci:** OPNFV Releng Project provides additional scripts, Ansible
playbooks and configuration options in order for developers to have an easy
way of using openstack/bifrost and openstack/openstack-ansible by just
@@ -85,29 +94,24 @@ deployed using VM nodes.
Available flavors are listed on the table below.
-+------------------+------------------------+---------------------+-------------------------+
-| Flavor | Number of VM Nodes | VM Specs Per Node | Time Estimates |
-+==================+========================+=====================+=========================+
-| All in One (aio) | | 1 VM Node | | vCPUs: 8 | | Provisioning: 10 mins |
-| | | controller & compute | | RAM: 12GB | | Deployment: 90 mins |
-| | | on single/same node | | Disk: 80GB | | Total: 100 mins |
-| | | 1 compute node | | NICs: 1 | | |
-+------------------+------------------------+---------------------+-------------------------+
-| Mini | | 3 VM Nodes | | vCPUs: 6 | | Provisioning: 12 mins |
-| | | 1 deployment node | | RAM: 12GB | | Deployment: 65 mins |
-| | | 1 controller node | | Disk: 80GB | | Total: 77 mins |
-| | | 1 compute node | | NICs: 1 | | |
-+------------------+------------------------+---------------------+-------------------------+
-| No HA | | 4 VM Nodes | | vCPUs: 6 | | Provisioning: 12 mins |
-| | | 1 deployment node | | RAM: 12GB | | Deployment: 70 mins |
-| | | 1 controller node | | Disk: 80GB | | Total: 82 mins |
-| | | 2 compute nodes | | NICs: 1 | | |
-+------------------+------------------------+---------------------+-------------------------+
-| HA | | 6 VM Nodes | | vCPUs: 6 | | Provisioning: 15 mins |
-| | | 1 deployment node | | RAM: 12GB | | Deployment: 105 mins |
-| | | 3 controller nodes | | Disk: 80GB | | Total: 120 mins |
-| | | 2 compute nodes | | NICs: 1 | | |
-+------------------+------------------------+---------------------+-------------------------+
++------------------+------------------------+---------------------+--------------------------+--------------------------+
+| Flavor           | Number of VM Nodes     | VM Specs Per Node   | Time Estimates OpenStack | Time Estimates Kubernetes|
++==================+========================+=====================+==========================+==========================+
+| Mini             | | 3 VM Nodes           | | vCPUs: 6          | | Provisioning: 12 mins  | | Provisioning: 12 mins  |
+|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 65 mins    | | Deployment: 35 mins    |
+|                  | | 1 controller node    | | Disk: 80GB        | | Total: 77 mins         | | Total: 47 mins         |
+|                  | | 1 compute node       | | NICs: 1           | |                        | |                        |
++------------------+------------------------+---------------------+--------------------------+--------------------------+
+| No HA            | | 4 VM Nodes           | | vCPUs: 6          | | Provisioning: 12 mins  | | Provisioning: 12 mins  |
+|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 70 mins    | | Deployment: 35 mins    |
+|                  | | 1 controller node    | | Disk: 80GB        | | Total: 82 mins         | | Total: 47 mins         |
+|                  | | 2 compute nodes      | | NICs: 1           | |                        | |                        |
++------------------+------------------------+---------------------+--------------------------+--------------------------+
+| HA               | | 6 VM Nodes           | | vCPUs: 6          | | Provisioning: 15 mins  | | Provisioning: 15 mins  |
+|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 105 mins   | | Deployment: 40 mins    |
+|                  | | 3 controller nodes   | | Disk: 80GB        | | Total: 120 mins        | | Total: 55 mins         |
+|                  | | 2 compute nodes      | | NICs: 1           | |                        | |                        |
++------------------+------------------------+---------------------+--------------------------+--------------------------+
The specs for VMs are configurable and the more vCPU/RAM the better.
@@ -122,8 +126,8 @@ depending on
* installed/activated OpenStack services
* internet connection bandwidth
-Flavor Layouts
---------------
+Flavor Layouts - OpenStack Based Deployments
+--------------------------------------------
All flavors are created and deployed based on the upstream OpenStack Ansible (OSA)
guidelines.
@@ -141,14 +145,6 @@ ongoing.
The differences between the flavors are documented below.
-**All in One**
-
-As shown on the table in the previous section, this flavor consists of a single
-node. All the OpenStack services, including compute run on the same node.
-
-The flavor All in One (aio) is deployed based on the process described in the
-upstream documentation. Please check `OpenStack Ansible Developer Quick Start <https://docs.openstack.org/openstack-ansible/pike/contributor/quickstart-aio.html>`_ for details.
-
**Mini/No HA/HA**
These flavors consist of multiple nodes.
@@ -165,6 +161,38 @@ flavors.
.. image:: images/arch-layout-test.png
:scale: 75 %
+Flavor Layouts - Kubernetes Based Deployments
+---------------------------------------------
+
+All flavors are created and deployed based on the upstream Kubespray guidelines.
+
+For network plugins, Calico is used. Flannel, Weave, Contiv, Canal and Cilium
+are also currently supported.
+
+The differences between the flavors are documented below.
+
+**Mini/No HA/HA**
+
+These flavors consist of multiple nodes.
+
+* **opnfv**: This node is used for driving the installation towards target nodes
+ in order to ensure the deployment process is isolated from the physical host
+ and always done on a clean machine.
+* **master**: provides the Kubernetes cluster's control plane.
+* **node**: a worker machine in Kubernetes, previously known as a minion.
+
+The HA flavor has 3 master nodes, and a load balancer is set up as part of the
+deployment process. Access to the Kubernetes cluster is done through the load
+balancer.
+
+Please see the diagrams below for the host and service layout for these
+flavors.
+
+.. image:: images/arch-layout-k8s-noha.png
+ :scale: 75 %
+
+.. image:: images/arch-layout-k8s-ha.png
+ :scale: 75 %
+
User Guide
==========
@@ -200,12 +228,17 @@ How to Use
| ``cd releng-xci/xci``
-4. Execute the sandbox script
+4. If you want to deploy a Kubernetes based scenario, set the variables as
+   below. Otherwise, skip this step.
+
+ | ``export INSTALLER_TYPE=kubespray``
+ | ``export DEPLOY_SCENARIO=k8-nosdn-nofeature``
+
+5. Execute the sandbox script
| ``./xci-deploy.sh``
Issuing the above command will start the sandbox deployment using the default
-flavor ``aio`` and the verified versions of upstream components.
+flavor ``mini`` and the verified versions of upstream components.
(`pinned-versions <https://git.opnfv.org/releng-xci/tree/xci/config/pinned-versions>`_).
The sandbox should be ready in 1.5 to 2 hours, depending on the host
machine.
@@ -241,8 +274,14 @@ default.
5. Set the version to use for openstack-ansible
+ 1) if deploying an OpenStack based scenario
+
| ``export OPENSTACK_OSA_VERSION=master``
+ 2) if deploying a Kubernetes based scenario
+
+ | ``export KUBESPRAY_VERSION=master``
+
6. Set where the logs should be stored
| ``export LOG_PATH=/home/jenkins/xcilogs``
@@ -256,7 +295,7 @@ behaviors, especially if it is changed to ``master``. If you are not
sure about how good the version you intend to use is, it is advisable to
use the pinned versions instead.
-**Verifying the Basic Operation**
+**Verifying the OpenStack Basic Operation**
You can verify the basic operation using the commands below.
@@ -276,6 +315,23 @@ You can also access the Horizon UI by using the URL, username, and
the password displayed on your console upon the completion of the
deployment.
+**Verifying the Kubernetes Basic Operation**
+
+You can verify the basic operation using the commands below.
+
+1. Login to opnfv host
+
+ | ``ssh root@192.168.122.2``
+
+2. Issue kubectl commands
+
+ | ``kubectl get nodes``
+
+You can also access the Kubernetes Dashboard UI by using the URL,
+username, and the password displayed on your console upon the
+completion of the deployment.
+
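+As an additional quick check, you can list the pods across all namespaces and
+verify that they reach the ``Running`` state.
+
+ | ``kubectl get pods --all-namespaces``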
+
**Debugging Tips**
If ``xci-deploy.sh`` fails midway through and you happen to fix whatever
@@ -295,11 +351,12 @@ Here are steps that take place upon the execution of the sandbox script
2. Installs ansible on the host where sandbox script is executed.
3. Creates and provisions VM nodes based on the flavor chosen by the user.
4. Configures the host where the sandbox script is executed.
-5. Configures the deployment host which the OpenStack installation will
- be driven from.
-6. Configures the target hosts where OpenStack will be installed.
-7. Configures the target hosts as controller(s) and compute(s) nodes.
-8. Starts the OpenStack installation.
+5. Configures the deployment host from which the OpenStack/Kubernetes
+   installation will be driven.
+6. Configures the target hosts where OpenStack/Kubernetes will be installed.
+7. Configures the target hosts as controller(s)/compute(s) or master(s)/worker(s)
+ depending on the deployed scenario.
+8. Starts the OpenStack/Kubernetes installation.
.. image:: images/xci-basic-flow.png
:height: 640px