Diffstat (limited to 'docs/specs')
-rw-r--r--  docs/specs/infra_manager.rst   130
-rw-r--r--  docs/specs/k8-calico-onap.rst  141
-rw-r--r--  docs/specs/k8-odl-coe.rst      105
3 files changed, 376 insertions, 0 deletions
diff --git a/docs/specs/infra_manager.rst b/docs/specs/infra_manager.rst
new file mode 100644
index 00000000..a8ecb548
--- /dev/null
+++ b/docs/specs/infra_manager.rst
@@ -0,0 +1,130 @@
+PDF and IDF support in XCI
+###########################
+:date: 2018-04-30
+
+This spec introduces the work required to adapt XCI to use PDF and IDF, which
+will drive both virtual and baremetal deployments.
+
+Definition of Terms
+===================
+* Baremetal deployment: Deployment on physical servers, as opposed to deploying
+software on virtual machines or containers running on the same physical server.
+
+* Virtual deployment: Deployment on virtual machines, i.e. the servers where the
+nodes will be deployed are virtualized. For example, in OpenStack, the compute and
+controller nodes will be virtual machines. This deployment is normally done on just
+one physical server.
+
+* PDF: It stands for POD Descriptor File, a document that lists the hardware
+characteristics of the set of physical or virtual machines which form the
+infrastructure. Example:
+
+https://git.opnfv.org/pharos/tree/config/pdf/pod1.yaml
+
+* IDF: It stands for Installer Descriptor File, a document that includes the
+information the installers need to accomplish the baremetal deployment.
+Example:
+
+https://git.opnfv.org/fuel/tree/mcp/config/labs/local/idf-pod1.yaml
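+
+For illustration, a minimal, hypothetical PDF entry for one node could look as
+follows (the field names are indicative only; the Pharos example linked above
+is the authoritative schema)::
+
+   nodes:
+     - name: node1
+       node:
+         type: baremetal
+         cpus: 16
+         memory: 64G
+       disks:
+         - name: disk1
+           disk_capacity: 480G
+       remote_management:
+         type: ipmi
+         address: 192.168.10.101
+         user: admin
+         pass: octopus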
+
+Problem description
+===================
+
+Currently, XCI only supports virtualized deployments running on one server. This
+is good when the user has limited resources; however, baremetal is the preferred
+way to deploy NFV platforms in lab or production environments. Moreover, this
+greatly limits the scope of testing because NFV hardware-specific features
+such as SR-IOV cannot be exercised.
+
+Proposed change
+===============
+
+Introduce the infra_manager tool, which will prepare the infrastructure for XCI
+to drive the deployment on a set of virtual or baremetal nodes. This tool will
+execute two tasks:
+
+1. Creation of the virtual nodes, or initialization of the preparations for
+   baremetal nodes
+2. OS provisioning on the nodes, whether virtual or baremetal
+
+Once those steps are ready, XCI will continue with the deployment of the
+scenario on the provisioned nodes.
+
+The infra_manager tool will consume the PDF and IDF files describing the
+infrastructure as input. It will then use a <yet-to-be-created-tool> to perform
+step 1 and bifrost to boot the operating system on the nodes.
+
+Among other services, bifrost uses:
+
+- diskimage-builder (dib) to generate the OS images
+- dnsmasq as the DHCP server which provides the PXE boot mechanism
+- ipmitool to manage the servers
+
+Bifrost will be deployed inside a VM in the jumphost.
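+
+As an illustration of how servers are typically driven over IPMI in such
+deployments (the address and credentials below are placeholders; bifrost
+derives the real values from the PDF)::
+
+   ipmitool -I lanplus -H 192.168.10.101 -U admin -P octopus chassis power status
+   ipmitool -I lanplus -H 192.168.10.101 -U admin -P octopus chassis bootdev pxe
+   ipmitool -I lanplus -H 192.168.10.101 -U admin -P octopus chassis power cycle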
+
+For the time being, we will create the infrastructure based on the defined XCI
+flavors; however, the implementation should not hinder the possibility of
+having one PDF and IDF per scenario, defining the characteristics and the
+number of nodes to be deployed.
+
+Code impact
+-----------
+
+The new code will be introduced in a new directory called infra_manager under
+releng-xci/xci/prototypes.
+
+Tentative User guide
+--------------------
+
+Assuming the user cloned releng-xci in the jumphost, the following should be
+done:
+
+1. Move the IDF and PDF files which describe the infrastructure to
+   releng-xci/xci/prototypes/infra_manager/var. There is an example under
+   xci/var
+
+2. Export the XCI_FLAVOR variable (e.g. export XCI_FLAVOR=noha)
+
+3. Run the <yet-to-be-created-tool> to create the virtual nodes based on the
+   provided PDF information (cpu, ram, disk...) or to initialize the
+   preparations for baremetal nodes
+
+4. Start the bifrost process to boot the nodes
+
+5. Run the VIM deployer script:
+   releng-xci/xci/installer/$inst/deploy.sh
+
+   where $inst = {osa, kubespray, kolla}
+
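+Put together, a typical session on the jumphost could look as follows (the
+file names are illustrative, and the step 3 tool invocation is still to be
+defined)::
+
+   # 1. Provide the infrastructure description
+   cp pod1.yaml idf-pod1.yaml releng-xci/xci/prototypes/infra_manager/var/
+
+   # 2. Select the XCI flavor
+   export XCI_FLAVOR=noha
+
+   # 3./4. Create the nodes and boot them with bifrost
+   # (<yet-to-be-created-tool> invocation goes here once it exists)
+
+   # 5. Deploy the VIM, e.g. with OpenStack-Ansible
+   cd releng-xci/xci/installer/osa && ./deploy.sh
+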
+In case of problems, the best way to debug is to access the bifrost VM and use:
+
+* bifrost-utils
+* ipmitool
+* the DHCP messages in /var/log/syslog
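+
+For instance, to verify on the bifrost VM that the nodes are requesting DHCP
+leases and PXE images (assuming syslog is the log destination, as on Ubuntu)::
+
+   grep dnsmasq /var/log/syslog | tail -n 50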
+
+
+Implementation
+==============
+
+Assignee(s)
+-----------
+
+Primary assignee:
+ Manuel Buil (mbuil)
+ Jack Morgan (jmorgan1)
+ Somebody_else_please (niceperson)
+
+Work items
+----------
+
+1. Provide support for a dynamically generated inventory based on PDF and IDF.
+   This mechanism could be used for both baremetal and virtual deployments.
+
+2. Contribute the servers-prepare.sh script
+
+3. Contribute the nodes-deploy.sh script
+
+4. Integrate the three previous components correctly
+
+5. Provide support for the XCI supported operating systems (openSUSE, Ubuntu,
+   CentOS)
+
+6. Allow one PDF and IDF per scenario
diff --git a/docs/specs/k8-calico-onap.rst b/docs/specs/k8-calico-onap.rst
new file mode 100644
index 00000000..445e5c71
--- /dev/null
+++ b/docs/specs/k8-calico-onap.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. Copyright 2018 Intel Corporation
+
+.. Links
+.. _Open Networking Automation Platform: https://www.onap.org/
+.. _ONAP metric analysis: https://onap.biterg.io/
+.. _ONAP on Kubernetes: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html
+.. _Helm: https://docs.helm.sh/
+.. _ONAP on OpenStack: https://wiki.onap.org/display/DW/ONAP+Installation+in+Vanilla+OpenStack
+.. _OOM Minimum Hardware Configuration: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html#minimum-hardware-configuration
+.. _OOM Software Requirements: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html#software-requirements
+.. _seed code: https://gitlab.com/Orange-OpenSource/onap_oom_automatic_installation
+.. _Orange ONAP OOM Deployment Resource Requirements: https://gitlab.com/Orange-OpenSource/kubespray_automatic_installation/blob/521fa87b20fdf4643f30fc28e5d70bdf9f1c98f3/vars/pdf.yaml
+
+This spec introduces the work required to include the XCI scenario
+for `Open Networking Automation Platform`_ (ONAP) through the ONAP
+Operations Manager (OOM) tool. This tool provides the ability to manage
+the entire life-cycle of an ONAP installation on top of a Kubernetes
+deployment.
+
+Problem description
+===================
+According to the `ONAP metric analysis`_, more than 26K commit
+changes have been submitted since its announcement. Every patchset
+that is merged triggers a Jenkins job for the creation and deployment
+of a Docker container image for the corresponding service. Those new
+images are consumed by deployment methods like `ONAP on Kubernetes`_
+and `ONAP on OpenStack`_ during the installation of ONAP services.
+
+Given that ONAP is constantly changing, detecting issues early is
+crucial for ensuring the proper operation of the OOM tool.
+
+Minimum Hardware Requirements
+=============================
+
+Initially, the No HA flavor will be the only supported flavor, in order
+to bring up a reference implementation of the scenario. Support for other
+flavors will be introduced based on this implementation.
+
+According to the `OOM Minimum Hardware Configuration`_, ONAP requires a
+large amount of resources, especially on the Kubernetes worker nodes.
+
+Given that the No HA flavor has multiple worker nodes, the containers can
+be distributed between the nodes, resulting in a smaller footprint of
+resources on each node.
+
+The No HA scenario consists of 1 Kubernetes master node and 2 Kubernetes
+Worker nodes. Total resource requirements should be calculated based on
+the number of nodes.
+
+This recommendation is work in progress and based on the Orange
+implementation, which can be seen in the
+`Orange ONAP OOM Deployment Resource Requirements`_.
+The resource requirements are subject to change and the scenario will
+be updated as necessary.
+
+Hardware for Kubernetes Master Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 8GB
+* HD: 150GB
+* vCores: 8
+
+Hardware for Kubernetes Worker Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 64GB
+* HD: 80GB
+* vCores: 16
+
+Proposed change
+===============
+
+In order to guarantee the proper installation and validation of ONAP
+services, this spec proposes two phases that complement each other:
+
+1. Creation of the k8-calico-onap scenario for the installation of ONAP
+   services. This new scenario will be designed to validate the
+   installation process provided by the OOM tool.
+2. Addition of integration tests ensuring that ONAP is operating
+   properly. This process should cover the Design and Runtime phases.
+
+Code impact
+-----------
+New code will be created based on the existing k8-calico-nofeature
+scenario and will be placed in the scenarios/k8-calico-onap directory
+in the releng-xci-scenarios repository. The ONAP installation should
+proceed once the VIM has been installed and before the OPNFV tests run.
+
+
+The default configuration for the virtual resources (4 vCores, 8GB RAM,
+and 100GB HD) offered by XCI does not satisfy ONAP's needs. The
+scenario override mechanism will be used to bring up nodes with
+the necessary amount of resources. This will be replaced by PDF and
+IDF once they become available. The PDF and IDF implementation is a
+separate work item and is not expected to be a dependency for the
+implementation of this scenario.
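+
+As a rough sketch of what such an override could look like (the variable
+names below are illustrative only, not the actual XCI override variables)::
+
+   # bump the node resources before running the deployment
+   export VM_MEMORY_SIZE=65536   # hypothetical variable, in MB
+   export VM_CPU_COUNT=16        # hypothetical variable
+   export VM_DISK_SIZE=80        # hypothetical variable, in GB
+   cd releng-xci/xci && ./xci-deploy.sh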
+
+Software Requirements
+---------------------
+
+OOM has gone through significant changes during the Beijing release
+cycle, which resulted in changes to the way ONAP is installed.
+
+In its current release, new software is necessary to install ONAP,
+as listed below and in the `OOM Software Requirements`_:
+
+* Helm: 2.8.x
+* kubectl: 1.8.10
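+
+A quick sanity check of the locally installed versions could look like this::
+
+   helm version --client      # expect v2.8.x
+   kubectl version --client   # expect v1.8.10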
+
+The OOM also provides a Makefile that collects the instructions for the
+creation of the ONAP packages in the Tiller repository. Which ONAP
+services are going to be enabled is determined through the OOM
+configuration. The new role implementing these steps will be placed in the
+scenarios/k8-calico-onap/role/k8-calico-onap/tasks folder in the
+releng-xci-scenarios repository.
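+
+A sketch of that installation flow, based on the Beijing-era OOM quickstart
+(the exact commands and chart names may differ as OOM evolves)::
+
+   helm serve &                                  # local chart repository for Tiller
+   helm repo add local http://127.0.0.1:8879
+   cd oom/kubernetes && make all                 # package the ONAP charts
+   helm install local/onap --name dev --namespace onap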
+
+Tentative User guide
+--------------------
+TBD
+
+Implementation
+==============
+The Orange team has been working on this scenario for a while; the new
+role can use and adapt their `seed code`_ during the implementation.
+
+Assignee(s)
+-----------
+
+Primary assignee:
+ Victor Morales (electrocucaracha)
+ Fatih Degirmenci (fdegir)
+ Jack Morgan (jmorgan1)
+
+Work items
+----------
+TBD
+
+Glossary
+--------
diff --git a/docs/specs/k8-odl-coe.rst b/docs/specs/k8-odl-coe.rst
new file mode 100644
index 00000000..cd29456c
--- /dev/null
+++ b/docs/specs/k8-odl-coe.rst
@@ -0,0 +1,105 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. Copyright 2018 Ericsson AB and Others
+
+.. Links
+.. _OpenDaylight COE: https://wiki.opendaylight.org/view/COE:Main
+.. _setting-up-coe-dev-environment: https://github.com/opendaylight/coe/blob/master/docs/setting-up-coe-dev-environment.rst
+.. _ansible-opendaylight: https://git.opendaylight.org/gerrit/gitweb?p=integration/packaging/ansible-opendaylight.git;a=tree
+
+This spec proposes adding a k8-odl-coe XCI scenario with OpenDaylight as the
+networking provider for Kubernetes, using the OpenDaylight COE (Container
+Orchestration Engine) and NetVirt projects.
+
+Problem Description
+===================
+
+Currently, OpenDaylight's advanced networking capabilities are not leveraged
+with Kubernetes in any XCI scenario. This spec proposes a reference platform
+for deployments that want to use OpenDaylight as a networking backend for
+Kubernetes.
+
+Minimum Hardware Requirements
+=============================
+
+Hardware for Kubernetes Master Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 16 GB (20 GB for the HA flavor, i.e. for OpenDaylight clustering)
+* HD: 80 GB
+* vCores: 6
+
+Hardware for Kubernetes Worker Node(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* RAM: 12 GB
+* HD: 80 GB
+* vCores: 6
+
+Supported XCI Sandbox Flavors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This scenario will support deployments on the Mini, No HA and HA XCI Sandbox Flavors.
+
+Proposed Change
+===============
+
+1. Provide Pod Descriptor Files (PDF) and IDF (Installer Descriptor Files)
+ specific to this scenario to install Kubernetes with OpenDaylight COE.
+2. Introduce a new scenario k8-odl-coe in releng-xci-scenarios repository.
+3. Reuse the role from the k8-nosdn-nofeature scenario to install Kubernetes.
+   It sets the kube_network_plugin option to 'cloud' in k8s-cluster.yml so
+   that Kubespray doesn't configure networking between the pods (see the
+   sketch after this list). This enables OpenDaylight to be chosen as the
+   networking backend in the subsequent steps.
+4. Enhance upstream `ansible-opendaylight`_ role to deploy OpenDaylight with
+ COE Watcher on k8s master node(s) and CNI plugin on the k8s master and
+ worker node(s).
+5. Add the required Ansible tasks in k8-odl-coe role to direct XCI and
+ ansible-opendaylight role to configure k8s with OpenDaylight as the
+ networking backend for pod connectivity.
+6. Run the Health Check by testing the pods' connectivity.
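+
+A minimal sketch of the Kubespray setting mentioned in step 3 (the exact file
+path within the inventory may differ)::
+
+   # leave pod networking unconfigured so OpenDaylight can take over
+   sed -i 's/^kube_network_plugin:.*/kube_network_plugin: cloud/' \
+       inventory/group_vars/k8s-cluster.yml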
+
+The COE Watcher binary and the COE CNI plugin are built from the OpenDaylight
+COE source code. The user has the flexibility to choose the SHA to build from
+via XCI's ansible-role-requirements.yml file.
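+
+A hypothetical pin in ansible-role-requirements.yml could look as follows
+(the keys and layout are indicative only)::
+
+   - name: ansible-opendaylight
+     src: https://git.opendaylight.org/gerrit/integration/packaging/ansible-opendaylight
+     version: <sha-to-test>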
+
+Code Impact
+-----------
+
+Code specific to the k8-odl-coe scenario will be added to the xci/scenarios
+directory of the releng-xci-scenarios repository.
+
+User Guide
+----------
+
+No user guide will be provided.
+
+Implementation
+==============
+
+See the Proposed Change section.
+
+Assignee(s)
+-----------
+
+Primary assignees:
+
+* Prem Sankar G (premsa)
+* Periyasamy Palanisamy (epalper)
+* Fatih Degirmenci (fdegir)
+
+Work Items
+----------
+
+1. Enhance the akka.conf.j2 in the upstream ansible-opendaylight role to work
+   with k8s deployments (i.e. run the ODL cluster on the k8s master nodes).
+   Currently this works only for deployments based on OpenStack-Ansible.
+2. Enhance the upstream ansible-opendaylight role to install the
+   odl-netvirt-coe and odl-restconf Karaf features and to build the COE
+   Watcher and CNI plugin binaries from source.
+3. Implement configure-kubenet.yml to choose OpenDaylight COE as the
+ networking backend.
+4. Implement Health Check tests (e.g. a basic pod-to-pod connectivity check
+   along the lines of the sketch below).
+
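+A minimal version of such a connectivity check could look like this (assuming
+kubectl access to the deployed cluster; the pod names and image are arbitrary)::
+
+   kubectl run odl-test-a --image=busybox --restart=Never -- sleep 3600
+   kubectl run odl-test-b --image=busybox --restart=Never -- sleep 3600
+   B_IP=$(kubectl get pod odl-test-b -o jsonpath='{.status.podIP}')
+   kubectl exec odl-test-a -- ping -c 3 "$B_IP"
+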
+Glossary
+--------