author | kalyanreddy <reddyx.gundarapu@intel.com> | 2017-03-28 11:34:11 +0530 |
---|---|---|
committer | kalyanreddy <reddyx.gundarapu@intel.com> | 2017-03-28 11:35:49 +0530 |
commit | f1f3cc27f23bdde81c37d8142d4288d811bd5e45 (patch) | |
tree | 28aeabc93b7aed199fc123738d65d8f390a19a86 /docs/release | |
parent | 1f4ef5ee33f715c03a85a868f12e89744f889cff (diff) |
Update documentation structure.
This patch is used to update documentation structure.
Change-Id: I50d4ef4256ccfc57a0434123e7532a50000582cf
Co-Authored-By: Srinivas <srinivas.atmakuri@tcs.com>
Co-Authored-By: RajithaY <rajithax.yerrumsetty@intel.com>
Co-Authored-By: Shravani Paladugula <shravanix.paladugula@intel.com>
Co-Authored-By: Navya Bathula <navyax.bathula@intel.com>
Signed-off-by: Gundarapu Kalyan Reddy <reddyx.gundarapu@intel.com>
Diffstat (limited to 'docs/release')
69 files changed, 5104 insertions, 0 deletions
diff --git a/docs/release/configurationguide/abstract.rst b/docs/release/configurationguide/abstract.rst
new file mode 100644
index 000000000..3693bcab7
--- /dev/null
+++ b/docs/release/configurationguide/abstract.rst
@@ -0,0 +1,16 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+======================
+Configuration Abstract
+======================
+
+This document provides guidance for the configurations available in the
+Danube release of OPNFV.
+
+The release includes four installer tools leveraging different technologies:
+Apex, Compass4nfv, Fuel and JOID, which deploy components of the platform.
+
+This document also covers the selection of tools and components, including
+guidelines for how to deploy and configure the platform to an operational
+state.
diff --git a/docs/release/configurationguide/configuration.options.render.rst b/docs/release/configurationguide/configuration.options.render.rst
new file mode 100644
index 000000000..71a78af2b
--- /dev/null
+++ b/docs/release/configurationguide/configuration.options.render.rst
@@ -0,0 +1,26 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+======================
+Configuration Options
+======================
+
+OPNFV provides a variety of virtual infrastructure deployments called scenarios
+designed to host virtualised network functions (VNFs). KVM4NFV scenarios
+provide specific capabilities and/or components aimed at solving specific
+problems in the deployment of VNFs. A KVM4NFV scenario includes components
+such as OpenStack and KVM, each with its own source components or
+configurations.
+
+.. note::
+
+   * Each KVM4NFV `scenario`_ provides unique features and capabilities; it is
+     important to understand the capabilities of your target platform before
+     installing and configuring. This configuration guide outlines how to
+     configure components in order to enable the features required.
+
+   * More details on the installation and description of the KVM4NFV scenarios can be found in the `scenario guide`_ of the KVM4NFV docs.
+
+.. _scenario: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#document-scenarios/kvmfornfv.scenarios.description
+
+.. _scenario guide: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#document-scenarios/abstract
diff --git a/docs/release/configurationguide/images/brahmaputrafeaturematrix.jpg b/docs/release/configurationguide/images/brahmaputrafeaturematrix.jpg
new file mode 100644
index 000000000..0d2a12279
--- /dev/null
+++ b/docs/release/configurationguide/images/brahmaputrafeaturematrix.jpg
Binary files differ
diff --git a/docs/release/configurationguide/images/brahmaputrascenariomatrix.jpg b/docs/release/configurationguide/images/brahmaputrascenariomatrix.jpg
new file mode 100644
index 000000000..84fc87a76
--- /dev/null
+++ b/docs/release/configurationguide/images/brahmaputrascenariomatrix.jpg
Binary files differ
diff --git a/docs/release/configurationguide/images/idle-idle-test.png b/docs/release/configurationguide/images/idle-idle-test.png
new file mode 100644
index 000000000..c9831df1d
--- /dev/null
+++ b/docs/release/configurationguide/images/idle-idle-test.png
Binary files differ
diff --git a/docs/release/configurationguide/images/stress-idle-test.png b/docs/release/configurationguide/images/stress-idle-test.png
new file mode 100644
index 000000000..111c2a7d2
--- /dev/null
+++ b/docs/release/configurationguide/images/stress-idle-test.png
Binary files differ
diff --git a/docs/release/configurationguide/images/weather-clear.jpg b/docs/release/configurationguide/images/weather-clear.jpg
new file mode 100644
index 000000000..011ad52e9
--- /dev/null
+++ b/docs/release/configurationguide/images/weather-clear.jpg
Binary files differ
diff --git a/docs/release/configurationguide/images/weather-dash.jpg b/docs/release/configurationguide/images/weather-dash.jpg
new file mode 100644
index 000000000..3bf98dd27
--- /dev/null
+++ b/docs/release/configurationguide/images/weather-dash.jpg
Binary files differ
diff --git a/docs/release/configurationguide/images/weather-few-clouds.jpg b/docs/release/configurationguide/images/weather-few-clouds.jpg
new file mode 100644
index 000000000..51994ee84
--- /dev/null
+++ b/docs/release/configurationguide/images/weather-few-clouds.jpg
Binary files differ
diff --git a/docs/release/configurationguide/images/weather-overcast.jpg b/docs/release/configurationguide/images/weather-overcast.jpg
new file mode 100644
index 000000000..bdc1e0487
--- /dev/null
+++ b/docs/release/configurationguide/images/weather-overcast.jpg
Binary files differ
diff --git a/docs/release/configurationguide/index.rst b/docs/release/configurationguide/index.rst
new file mode 100644
index 000000000..fa205f55a
--- /dev/null
+++ b/docs/release/configurationguide/index.rst
@@ -0,0 +1,19 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _kvmfornfv-configguide:
+
+***************************
+KVM4NFV Configuration Guide
+***************************
+
+Danube 1.0
+------------
+
+.. toctree::
+   :maxdepth: 2
+
+   ./abstract.rst
+   ./configuration.options.render.rst
+   ./low-latency.feature.configuration.description.rst
+   ./os-nosdn-kvm-ha.description.rst
diff --git a/docs/release/configurationguide/low-latency.feature.configuration.description.rst b/docs/release/configurationguide/low-latency.feature.configuration.description.rst
new file mode 100644
index 000000000..c53aa52f4
--- /dev/null
+++ b/docs/release/configurationguide/low-latency.feature.configuration.description.rst
@@ -0,0 +1,151 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+=============================================
+Low Latency Feature Configuration Description
+=============================================
+
+Introduction
+------------
+In the KVM4NFV project, we focus on enhancing the KVM hypervisor for NFV by
+initially looking at the following areas:
+
+* Minimal interrupt latency variation for data plane VNFs:
+  * Minimal timing variation for timing correctness of real-time VNFs
+  * Minimal packet latency variation for data-plane VNFs
+* Inter-VM communication
+* Fast live migration
+
+Configuration of Cyclictest
+---------------------------
+
+Cyclictest measures the latency of response to a stimulus. Achieving low latency
+with the KVM4NFV project requires setting up a special test environment.
+This environment includes the BIOS settings, kernel configuration, kernel
+parameters and the run-time environment.
+
+* For more information regarding the test environment, please visit
+  https://wiki.opnfv.org/display/kvm/KVM4NFV+Test++Environment
+  https://wiki.opnfv.org/display/kvm/Nfv-kvm-tuning
+
+Pre-configuration activities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Intel POD10 is currently used as the OPNFV KVM4NFV test environment. The RPM
+packages from the latest build are downloaded onto the Intel POD10 jump server
+from the artifact repository. Yardstick, running in an Ubuntu Docker container
+on the Intel POD10 jump server, configures the host (Intel POD10 node1/node2,
+depending on the job type) and the guest, and triggers cyclictest on the guest
+using the below sample yaml file.
+
+
+.. code:: bash
+
+    For IDLE-IDLE test,
+
+    host_setup_seqs:
+    - "host-setup0.sh"
+    - "reboot"
+    - "host-setup1.sh"
+    - "host-run-qemu.sh"
+
+    guest_setup_seqs:
+    - "guest-setup0.sh"
+    - "reboot"
+    - "guest-setup1.sh"
+
+.. figure:: images/idle-idle-test.png
+   :name: idle-idle-test
+   :width: 100%
+   :align: center
+
+.. code:: bash
+
+    For [CPU/Memory/IO]Stress-IDLE tests,
+
+    host_setup_seqs:
+    - "host-setup0.sh"
+    - "reboot"
+    - "host-setup1.sh"
+    - "stress_daily.sh" [cpustress/memory/io]
+    - "host-run-qemu.sh"
+
+    guest_setup_seqs:
+    - "guest-setup0.sh"
+    - "reboot"
+    - "guest-setup1.sh"
+
+.. figure:: images/stress-idle-test.png
+   :name: stress-idle-test
+   :width: 100%
+   :align: center
+
+The following scripts are used for configuring the host and guest to create a
+special test environment and achieve low latency.
+
+Note: host-setup0.sh, host-setup1.sh and host-run-qemu.sh are run on the host,
+followed by the guest-setup0.sh and guest-setup1.sh scripts on the guest VM.
+
+**host-setup0.sh**: Running this script installs the latest kernel RPM
+on the host and makes the following changes to create the special test
+environment.
+
+   * Isolates CPUs from the general scheduler
+   * Stops timer ticks on isolated CPUs whenever possible
+   * Stops RCU callbacks on isolated CPUs
+   * Enables the Intel IOMMU driver and disables DMA translation for devices
+   * Sets HugeTLB pages to 1GB
+   * Disables machine check
+   * Disables clocksource verification at runtime
+
+**host-setup1.sh**: Running this script makes the following test
+environment changes.
+
+   * Disables watchdogs to reduce overhead
+   * Disables RT throttling
+   * Reroutes interrupts bound to isolated CPUs to CPU 0
+   * Changes the iptables rules so that we can ssh to the guest remotely
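+
+For reference, the host-setup changes listed above typically correspond to
+kernel boot parameters of the following kind. This is an illustrative sketch
+only: the CPU list and hugepage count below are hypothetical, and the actual
+values are chosen by the setup scripts to match the POD's topology.
+
+.. code:: bash
+
+    # Illustrative kernel command line fragment (hypothetical CPU list 2-7):
+    # isolcpus/nohz_full/rcu_nocbs isolate CPUs from the scheduler and stop
+    # timer ticks and RCU callbacks on them; intel_iommu=on iommu=pt enable
+    # the Intel IOMMU in pass-through mode; the hugepage options reserve 1GB
+    # pages; mce=off disables machine check; tsc=reliable disables
+    # clocksource verification at runtime.
+    isolcpus=2-7 nohz_full=2-7 rcu_nocbs=2-7 intel_iommu=on iommu=pt \
+    default_hugepagesz=1G hugepagesz=1G hugepages=16 mce=off tsc=reliable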
+**stress_daily.sh**: This script is triggered only for stress-idle tests. Running it
+makes the following environment changes.
+
+   * Triggers stress_script.sh, which runs the stress command with the necessary options
+   * CPU, memory or IO stress can be applied based on the test type
+   * Applying stress only on the host is handled in the D-Release
+   * For the Idle-Idle test the stress script is not triggered
+   * Stress is applied only on the free cores to prevent load on the QEMU process
+
+   **Note:**
+   - On NUMA node 1, cores 22,23 are allocated for the QEMU process
+   - Cores 24-43 are used for applying stress
+
+**host-run-qemu.sh**: Running this script launches a guest VM on the host.
+Note: the guest disk image is downloaded from the artifact repository.
+
+**guest-setup0.sh**: Running this script on the guest VM installs the
+latest build kernel RPM and cyclictest, and makes the following configuration
+changes on the guest VM.
+
+   * Isolates CPUs from the general scheduler
+   * Stops timer ticks on isolated CPUs whenever possible
+   * Uses a polling idle loop to improve performance
+   * Disables clocksource verification at runtime
+
+**guest-setup1.sh**: Running this script on the guest VM applies the following
+configurations.
+
+   * Disables watchdogs to reduce overhead
+   * Routes device interrupts to the non-RT CPU
+   * Disables RT throttling
+
+Hardware configuration
+~~~~~~~~~~~~~~~~~~~~~~
+
+Intel POD10 is currently used as the test environment for KVM4NFV to execute
+cyclictest. As part of this test environment, Intel pod10-jump is configured
+as a Jenkins slave and all the latest build artifacts are downloaded onto it.
+
+* For more information regarding hardware configuration, please visit
+  https://wiki.opnfv.org/display/pharos/Intel+Pod10
+  https://build.opnfv.org/ci/computer/intel-pod10/
+  http://artifacts.opnfv.org/octopus/brahmaputra/docs/octopus_docs/opnfv-jenkins-slave-connection.html
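+
+For reference, the kind of cyclictest invocation that Yardstick drives on the
+guest resembles the following sketch. The options shown are hypothetical; the
+actual command, priorities and durations are defined by the Yardstick test
+case, not by this document.
+
+.. code:: bash
+
+    # Hypothetical run: lock memory, SCHED_FIFO priority 99, 1 ms interval,
+    # one measurement thread pinned to CPU 1, histogram of latencies up to
+    # 60 us, 100000 loops, quiet output until the final summary.
+    $ sudo cyclictest --mlockall --priority=99 --interval=1000 --threads=1 \
+          --affinity=1 --loops=100000 --histogram=60 --quiet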
diff --git a/docs/release/configurationguide/scenariomatrix.rst b/docs/release/configurationguide/scenariomatrix.rst
new file mode 100644
index 000000000..3da38ed60
--- /dev/null
+++ b/docs/release/configurationguide/scenariomatrix.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+==============
+Scenariomatrix
+==============
+
+Scenarios are implemented as deployable compositions through integration with an installation tool.
+OPNFV supports multiple installation tools, and for any given release not all tools will support all
+scenarios. While our target is to establish parity across the installation tools to ensure they
+can provide all scenarios, the practical challenge of achieving that goal for any given feature and
+release results in some disparity.
+
+Danube scenario overview
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The following table provides an overview of the installation tools and available scenarios
+in the Danube release of OPNFV.
+
+Scenario status is indicated by a weather pattern icon. All scenarios listed with
+a weather pattern are possible to deploy and run in your environment or a Pharos lab;
+however, they may have known limitations or issues, as indicated by the icon.
+
+Weather pattern icon legend:
+
++---------------------------------------------+----------------------------------------------------------+
+| Weather Icon                                | Scenario Status                                          |
++=============================================+==========================================================+
+| .. image:: images/weather-clear.jpg         | Stable, no known issues                                  |
++---------------------------------------------+----------------------------------------------------------+
+| .. image:: images/weather-few-clouds.jpg    | Stable, documented limitations                           |
++---------------------------------------------+----------------------------------------------------------+
+| .. image:: images/weather-overcast.jpg      | Deployable, stability or feature limitations             |
++---------------------------------------------+----------------------------------------------------------+
+| .. image:: images/weather-dash.jpg          | Not deployed with this installer                         |
++---------------------------------------------+----------------------------------------------------------+
+
+Scenarios that are not yet in a state of "Stable, no known issues" will continue to be stabilised
+and updates will be made on the stable/danube branch. While we intend that all Danube
+scenarios should be stable, it is worth checking regularly to see the current status. Due to
+our dependency on upstream communities and code, some issues may not be resolved prior to the D release.
+
+Scenario Naming
+^^^^^^^^^^^^^^^
+
+In OPNFV, scenarios are identified by short scenario names. These names follow a scheme that
+identifies the key components and behaviours of the scenario. The rules for scenario naming are as follows:
+
+.. code:: bash
+
+    os-[controller]-[feature]-[mode]-[option]
+
+Details of the fields are:
+
+ * **[os]:** mandatory
+
+   * Refers to the platform type used
+   * possible value: os (OpenStack)
+
+ * **[controller]:** mandatory
+
+   * Refers to the SDN controller integrated in the platform
+   * example values: nosdn, ocl, odl, onos
+
+ * **[feature]:** mandatory
+
+   * Refers to the feature projects supported by the scenario
+   * example values: nofeature, kvm, ovs, sfc
+
+ * **[mode]:** mandatory
+
+   * Refers to the deployment type, which may include for instance high availability
+   * possible values: ha, noha
+
+ * **[option]:** optional
+
+   * Used for scenarios that do not fit into the naming scheme (an example
+     parse is sketched below).
+   * The optional field in the short scenario name should not be included if there is no optional scenario.
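+
+As a quick illustration (a hypothetical snippet, not part of any OPNFV
+tooling), a short scenario name can be split into its fields with standard
+shell tools. Note that only ``-`` separates fields, while ``_`` joins values
+within a single field:
+
+.. code:: bash
+
+    # Split a short scenario name into its five fields.
+    name="os-nosdn-kvm_nfv_ovs_dpdk-ha"
+    IFS='-' read -r os controller feature mode option <<< "${name}"
+    echo "os=${os} controller=${controller} feature=${feature}"
+    echo "mode=${mode} option=${option:-none}"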
+
+Some examples of supported scenario names are:
+
+ * **os-nosdn-kvm-noha**
+
+   * This is an OpenStack-based deployment using Neutron, including the OPNFV enhanced KVM hypervisor
+
+ * **os-onos-nofeature-ha**
+
+   * This is an OpenStack deployment in high availability mode including ONOS as the SDN controller
+
+ * **os-odl_l2-sfc**
+
+   * This is an OpenStack deployment using OpenDaylight and OVS enabled with SFC features
+
+ * **os-nosdn-kvm_nfv_ovs_dpdk-ha**
+
+   * This is an OpenStack deployment with high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+   * This deployment has ``3-Controller and 2-Compute nodes``
+
+ * **os-nosdn-kvm_nfv_ovs_dpdk-noha**
+
+   * This is an OpenStack deployment without high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+   * This deployment has ``1-Controller and 3-Compute nodes``
+
+ * **os-nosdn-kvm_nfv_ovs_dpdk_bar-ha**
+
+   * This is an OpenStack deployment with high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+     and Barometer
+   * This deployment has ``3-Controller and 2-Compute nodes``
+
+ * **os-nosdn-kvm_nfv_ovs_dpdk_bar-noha**
+
+   * This is an OpenStack deployment without high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+     and Barometer
+   * This deployment has ``1-Controller and 3-Compute nodes``
+
+Installing your scenario
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are two main methods of deploying your target scenario: one is to follow this guide, which
+walks you through the process of deploying to your hardware using scripts or ISO images; the other
+is to set up a Jenkins slave and connect your infrastructure to the OPNFV Jenkins master.
+
+For the purposes of evaluation and development, a number of Danube scenarios can be deployed
+virtually to mitigate the requirements on physical infrastructure. Details and instructions on performing
+virtual deployments can be found in the installer-specific installation instructions.
+
+To set up a Jenkins slave for automated deployment to your lab, refer to the `Jenkins slave connect guide.
+<http://artifacts.opnfv.org/brahmaputra.1.0/docs/opnfv-jenkins-slave-connection.brahmaputra.1.0.html>`_
diff --git a/docs/release/glossary/kvmfornfv_glossary.rst b/docs/release/glossary/kvmfornfv_glossary.rst
new file mode 100644
index 000000000..aed5a971e
--- /dev/null
+++ b/docs/release/glossary/kvmfornfv_glossary.rst
@@ -0,0 +1,401 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+**************
+OPNFV Glossary
+**************
+
+Danube 1.0
+------------
+
+
+Contents
+--------
+
+This glossary provides a common definition of phrases and words commonly used
+in OPNFV.
+
+--------
+
+A
+~
+
+Arno
+
+  A river running through Tuscany and the name of the first OPNFV release.
+
+API
+
+  Application Programming Interface
+
+AVX2
+
+  Advanced Vector Extensions 2 is an instruction set extension for x86.
+
+
+--------
+
+B
+~
+
+Brahmaputra
+
+  A river running through Asia and the name of the second OPNFV release.
+
+BIOS
+
+  Basic Input/Output System
+
+Builds
+
+  A build in Jenkins is a version of a program.
+
+Bogomips
+
+  Bogomips is the number of million times per second a processor can do
+  absolutely nothing.
+
+--------
+
+C
+~
+
+CAT
+
+  Cache Allocation Technology
+
+CentOS
+
+  Community Enterprise Operating System is a Linux distribution.
+
+CICD
+
+  Continuous Integration and Continuous Deployment
+
+CLI
+
+  Command Line Interface
+
+Colorado
+
+  A river in Argentina and the name of the third OPNFV release.
+
+Compute
+
+  Compute is an OpenStack service which offers many configuration options
+  which may be deployment specific.
+
+Console
+
+  A console is a display screen.
+
+CPU
+  Central Processing Unit
+
+--------
+
+D
+~
+
+Danube
+
+  Danube is the fourth release of OPNFV and also a river in Europe.
+
+Data plane
+
+  The data plane is the part of a network that carries user traffic.
+
+Debian/deb
+
+  Debian is a Unix-like computer operating system that is composed entirely of
+  free software.
+
+Docs
+
+  Documentation/documents
+
+DPDK
+
+  Data Plane Development Kit
+
+DPI
+
+  Deep Packet Inspection
+
+DSCP
+
+  Differentiated Services Code Point
+
+--------
+
+F
+~
+
+Flavors
+
+  Flavors are templates used to define VM configurations.
+
+Fuel
+
+  Provides an intuitive, GUI-driven experience for deployment and management of OpenStack.
+
+--------
+
+H
+~
+
+Horizon
+
+  Horizon is an OpenStack service which serves as a UI.
+
+Hypervisor
+
+  A hypervisor, also called a virtual machine manager, is a program that allows
+  multiple operating systems to share a single hardware host.
+
+--------
+
+I
+~
+
+IGMP
+
+  Internet Group Management Protocol
+
+IOMMU
+
+  Input-Output Memory Management Unit
+
+IOPS
+
+  Input/Output Operations Per Second
+
+IRQ
+
+  Interrupt ReQuest is an interrupt request sent from the hardware level to
+  the CPU.
+
+IRQ affinity
+
+  IRQ affinity is the set of CPU cores that can service that interrupt.
+
+--------
+
+J
+~
+
+Jenkins
+
+  Jenkins is an open source continuous integration tool written in Java.
+
+JIRA
+
+  JIRA is bug-tracking software.
+
+Jitter
+
+  The variation in packet inter-arrival time at the destination is called jitter.
+
+JumpHost
+
+  A jump host, jump server or jumpbox is a computer on a network typically
+  used to manage devices in a separate security zone.
+
+--------
+
+K
+~
+
+Kernel
+
+  The kernel is a computer program that constitutes the central core of a
+  computer's operating system.
+
+--------
+
+L
+~
+
+Latency
+
+  The amount of time it takes a packet to travel from source to destination is
+  latency.
+
+libvirt
+
+  libvirt is an open source API, daemon and management tool for managing
+  platform virtualization.
+
+--------
+
+M
+~
+
+Migration
+
+  Migration is the process of moving from one operating environment
+  to another operating environment.
+
+--------
+
+N
+~
+
+NFV
+
+  Network Functions Virtualisation, an industry initiative to leverage
+  virtualisation technologies in carrier networks.
+
+NFVI
+
+  Network Function Virtualization Infrastructure
+
+NIC
+
+  Network Interface Controller
+
+NUMA
+
+  Non-Uniform Memory Access
+
+--------
+
+O
+~
+
+OPNFV
+
+  Open Platform for NFV, an open source project developing an NFV reference
+  platform and features.
+
+--------
+
+P
+~
+
+Pharos
+
+  Pharos is a lighthouse and the name of a project that develops an OPNFV lab
+  infrastructure which is geographically and technically diverse.
+
+Pipeline
+
+  A suite of plugins in Jenkins that lets you orchestrate automation.
+
+Platform
+
+  OPNFV provides an open source platform for deploying NFV solutions that
+  leverages investments from a community of developers and solution providers.
+
+Pools
+
+  A pool is a set of resources that are kept ready to use, rather than acquired
+  on use and released afterwards.
+
+--------
+
+Q
+~
+
+QEMU
+
+  QEMU is a free and open-source hosted hypervisor that performs hardware
+  virtualization.
+
+--------
+
+R
+~
+
+RDMA
+
+  Remote Direct Memory Access (RDMA)
+
+REST API
+
+  REST (REpresentational State Transfer) is an architectural style, and an
+  approach to communications that is often used in the development of web
+  services.
+
+--------
+
+S
+~
+
+Scaling
+
+  Refers to altering the size of a system or resource.
+
+Slave
+
+  Works with/for a master, where the master has unidirectional control over one
+  or more other devices.
+
+SR-IOV
+
+  Single Root I/O Virtualization.
+
+Spin locks
+
+  A spinlock is a lock which causes a thread trying to acquire it to simply
+  wait in a loop while repeatedly checking if the lock is available.
+
+Storage
+
+  Refers to computer components which store some data.
+
+--------
+
+T
+~
+
+Tenant
+
+  A tenant is a group of users who share common access with specific
+  privileges to the software instance.
+
+Tickless
+
+  A tickless kernel is an operating system kernel in which timer interrupts
+  do not occur at regular intervals, but are only delivered as required.
+
+TSC
+
+  Technical Steering Committee
+
+--------
+
+V
+~
+
+VLAN
+
+  A virtual local area network, typically an isolated ethernet network.
+
+VM
+
+  Virtual machine, an emulation in software of a computer system.
+
+VNF
+
+  Virtual network function, typically a networking application or function
+  running in a virtual environment.
+
+--------
+
+X
+~
+
+XBZRLE
+
+  Helps to reduce network traffic by sending only the updated data.
+
+--------
+
+Y
+~
+
+Yardstick
+
+  Yardstick is an OPNFV testing project for infrastructure verification.
diff --git a/docs/release/installationprocedure/abstract.rst b/docs/release/installationprocedure/abstract.rst
new file mode 100644
index 000000000..a53450eff
--- /dev/null
+++ b/docs/release/installationprocedure/abstract.rst
@@ -0,0 +1,11 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+********
+Abstract
+********
+
+This document gives instructions to the user on how to deploy the available
+KVM4NFV build scenarios verified for the Danube release of the OPNFV
+platform.
diff --git a/docs/release/installationprocedure/index.rst b/docs/release/installationprocedure/index.rst
new file mode 100644
index 000000000..9d75307f4
--- /dev/null
+++ b/docs/release/installationprocedure/index.rst
@@ -0,0 +1,16 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _kvmfornfv-installation:
+
+********************************
+KVM4NFV Installation Instruction
+********************************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   abstract.rst
+   kvm4nfv-cicd.installation.instruction.rst
diff --git a/docs/release/installationprocedure/kvm4nfv-cicd.installation.instruction.rst b/docs/release/installationprocedure/kvm4nfv-cicd.installation.instruction.rst
new file mode 100644
index 000000000..dedcca34f
--- /dev/null
+++ b/docs/release/installationprocedure/kvm4nfv-cicd.installation.instruction.rst
@@ -0,0 +1,107 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+================================
+KVM4NFV Installation Instruction
+================================
+
+Preparing the installation
+--------------------------
+
+The OPNFV KVM4NFV project (https://gerrit.opnfv.org/gerrit/kvmfornfv.git) is
+cloned first, to make the build scripts for QEMU and the kernel, RPMs and
+Debians available.
+
+HW requirements
+---------------
+
+These build scripts are triggered on the Jenkins slave build server. Currently
+Intel POD10 is used as the test environment for KVM4NFV to execute cyclictest.
+As part of this test environment, Intel pod10-jump is configured as a Jenkins
+slave and all the latest build artifacts are downloaded onto it. Intel
+pod10-node1 is the host on which a guest VM will be launched as a part of
+running cyclictest through Yardstick.
+
+Build instructions
+------------------
+
+Builds are possible for the following packages:
+
+**kvmfornfv source code**
+
+./ci/build.sh is the main script used to trigger
+the RPM (on 'centos') and Debian (on 'ubuntu') builds in this case.
+
+* How to build kernel/QEMU RPMs: to build RPM packages, the build.sh script is
+  run with the -p and -o options (i.e. the -p package option is passed as
+  "centos", or in the default case). Example:
+
+.. code:: bash
+
+    cd kvmfornfv/
+
+    For Kernel/Qemu RPMs,
+    sh ./ci/build.sh -p centos -o build_output
+
+* How to build kernel/QEMU Debians: to build Debian packages, the build.sh
+  script is run with the -p and -o options (i.e. the -p package option is
+  passed as "ubuntu"). Example:
+
+.. code:: bash
+
+    cd kvmfornfv/
+
+    For Kernel/Qemu Debians,
+    sh ./ci/build.sh -p ubuntu -o build_output
+
+
+* How to build all kernel & QEMU, RPMs & Debians: to build both Debian and RPM
+  packages, the build.sh script is run with the -p and -o options (i.e. the -p
+  package option is passed as "both"). Example:
+
+.. code:: bash
+
+    cd kvmfornfv/
+
+    For Kernel/Qemu RPMs and Debians,
+    sh ./ci/build.sh -p both -o build_output
+
+.. note:: KVM4NFV can be installed in two ways
+
+   1. As part of a `scenario deployment`_
+   2. As a `stand alone`_ component
+
+.. _scenario deployment: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#document-scenarios/kvmfornfv.scenarios.description
+.. _stand alone: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#build-instructions
+
+For installation of kvmfornfv as part of a scenario deployment, use this `link`_:
+
+.. code:: bash
+
+    http://artifacts.opnfv.org/kvmfornfv/docs/index.html#document-scenarios/kvmfornfv.scenarios.description
+
+
+Installation instructions
+-------------------------
+
+Installation can be done in the following ways:
+
+**1. From kvmfornfv source code**-
+The build packages that are prepared in the above section are installed
+differently depending on the platform.
+
+Please visit the links for each platform (an illustrative sketch is also given
+at the end of this document):
+
+* CentOS : https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-rpm-using.html
+* Ubuntu : https://help.ubuntu.com/community/InstallingSoftware
+
+**2. Using the Fuel installer**-
+
+* Please refer to the document present at /fuel-plugin/README.md
+
+Post-installation activities
+----------------------------
+
+After the packages are built, test these packages by executing the scripts
+present in ci/envs for configuring the host and guest respectively.
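+
+As an illustration of installing the built packages (a sketch only: the
+package file names below are hypothetical and vary per build; the output
+directory is the one passed to build.sh via -o):
+
+.. code:: bash
+
+    # On CentOS, install the built kernel/QEMU RPMs:
+    cd build_output
+    sudo rpm -ivh kernel-*.rpm qemu-*.rpm
+
+    # On Ubuntu, install the built Debian packages:
+    sudo dpkg -i kernel-*.deb qemu-*.deb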
diff --git a/docs/release/installationprocedure/kvm4nfv-cicd.release.notes.rst b/docs/release/installationprocedure/kvm4nfv-cicd.release.notes.rst
new file mode 100644
index 000000000..415182bc7
--- /dev/null
+++ b/docs/release/installationprocedure/kvm4nfv-cicd.release.notes.rst
@@ -0,0 +1,103 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+=============================
+Release Note for KVM4NFV CICD
+=============================
+
+
+Abstract
+--------
+
+This document contains the release notes for the Danube release of OPNFV when using the KVM4NFV CICD process.
+
+Introduction
+------------
+
+This section gives a brief introduction to how this configuration is used in
+the OPNFV release, with KVM4NFV CICD as the scenario.
+
+Be sure to reference your scenario installation instruction.
+
+Release Data
+------------
+
++--------------------------------------+--------------------------------------+
+| **Project**                          | NFV Hypervisors-KVM                  |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Repo/tag**                         | kvmfornfv                            |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release designation**              |                                      |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release date**                     | 2017-03-27                           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Purpose of the delivery**          | - Automate the KVM4NFV CICD scenario |
+|                                      | - Executing latency test cases       |
+|                                      | - Collection of logs for debugging   |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+
+Document version change
+-----------------------
+
+The following documents are added:
+ - configurationguide
+ - installationprocedure
+ - userguide
+ - overview
+ - glossary
+ - releasenotes
+
+Reason for new version
+----------------------
+
+Feature additions
+~~~~~~~~~~~~~~~~~
+
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE**                   | **SLOGAN**                           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-34         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-57         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-58         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-59         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-60         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+Known issues
+------------
+
+**JIRA TICKETS:**
+
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE**                   | **SLOGAN**                           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-75         |
++--------------------------------------+--------------------------------------+
+
+Workarounds
+-----------
+See JIRA: https://jira.opnfv.org/projects
+
+
+References
+==========
+For more information on the OPNFV Danube release, please visit
+http://www.opnfv.org/danube
diff --git a/docs/release/releasenotes/index.rst b/docs/release/releasenotes/index.rst
new file mode 100644
index 000000000..3cf19f32f
--- /dev/null
+++ b/docs/release/releasenotes/index.rst
@@ -0,0 +1,13 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _kvmfornfv-releasenotes:
+
+========================================
+KVM4NFV Release Notes for Danube Release
+========================================
+
+.. toctree::
+   :maxdepth: 2
+
+   release-notes
diff --git a/docs/release/releasenotes/release-notes.rst b/docs/release/releasenotes/release-notes.rst
new file mode 100644
index 000000000..c52b4b839
--- /dev/null
+++ b/docs/release/releasenotes/release-notes.rst
@@ -0,0 +1,280 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _KVM4NFV: https://wiki.opnfv.org/display/kvm/
+
+=============
+Release Notes
+=============
+
+Abstract
+---------
+
+This document provides the release notes for the Danube 1.0 release of KVM4NFV.
+
+
+**Contents**
+
+   **1 Version History**
+
+   **2 Important notes**
+
+   **3 Summary**
+
+   **4 Delivery Data**
+
+   **5 References**
+
+Version history
+---------------
+
++--------------------+--------------------+--------------------+----------------------+
+| **Date**           | **Ver.**           | **Author**         | **Comment**          |
+|                    |                    |                    |                      |
++--------------------+--------------------+--------------------+----------------------+
+| 2016-08-22         | 0.1.0              |                    | Colorado 1.0 release |
+|                    |                    |                    |                      |
++--------------------+--------------------+--------------------+----------------------+
+| 2017-03-27         | 0.1.0              |                    | Danube 1.0 release   |
+|                    |                    |                    |                      |
++--------------------+--------------------+--------------------+----------------------+
+
+Important notes
+---------------
+
+The KVM4NFV project is currently supported on the Fuel installer.
+
+Summary
+-------
+
+This Danube 1.0 release provides *KVM4NFV* as a framework to enhance the
+KVM Hypervisor for NFV and OPNFV scenario testing, automated in the OPNFV
+CI pipeline, including:
+
+* KVMFORNFV source code
+
+* Automation of building the kernel and QEMU for RPM and Debian packages
+
+* Cyclictest execution to check latency
+
+* “os-nosdn-kvm-ha”, “os-nosdn-kvm_nfv_ovs_dpdk-ha”, “os-nosdn-kvm_nfv_ovs_dpdk-noha”,
+  “os-nosdn-kvm_nfv_ovs_dpdk_bar-ha” and “os-nosdn-kvm_nfv_ovs_dpdk_bar-noha” scenario
+  testing for ``high availability/no-high availability`` configuration using the Fuel
+  installer
+
+* Documentation created for:
+
+   * User Guide
+
+   * Configuration Guide
+
+   * Installation Procedure
+
+   * Release notes
+
+   * Scenarios Guide
+
+   * Design Guide
+
+   * Requirements Guide
+
+The *KVM4NFV framework* is developed in the OPNFV community by the
+KVM4NFV_ team.
+
+Release Data
+------------
+
++--------------------------------------+--------------------------------------+
+| **Project**                          | NFV Hypervisors-KVM                  |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Repo/commit-ID**                   | kvmfornfv                            |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release designation**              | Danube                               |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release date**                     | 2017-03-27                           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Purpose of the delivery**          | OPNFV Danube 1.0 release             |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+Version change
+--------------
+
+1 Module version changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is the Danube 1.0 main release. It is based on the following upstream
+versions:
+
+* RT Kernel 4.4.50-rt62
+
+* QEMU 2.6
+
+* Fuel plugin based on Fuel 10.0
+
+This is the second tracked release of KVM4NFV.
+
+
+2 Document version changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This is the second version of the KVM4NFV framework in OPNFV.
+
+Reason for version
+------------------
+
+1 Feature additions
+~~~~~~~~~~~~~~~~~~~
+
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE**                   | **SLOGAN**                           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-57         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-58         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-59         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-61         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-62         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-63         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-64         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| JIRA:                                | NFV Hypervisors-KVMFORNFV-65         |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+A brief ``Description of the JIRA tickets``:
+
++---------------------------------------+-------------------------------------------------------------+
+| **JIRA REFERENCE**                    | **DESCRIPTION**                                             |
+|                                       |                                                             |
++---------------------------------------+-------------------------------------------------------------+
+| KVMFORNFV-57                          | CI/CD Integration into Yardstick                            |
+|                                       |                                                             |
++---------------------------------------+-------------------------------------------------------------+
+| KVMFORNFV-58                          | Complete the integration of test plan into Yardstick        |
+|                                       | and Jenkins infrastructure to include latency testing       |
+|                                       |                                                             |
++---------------------------------------+-------------------------------------------------------------+
+| KVMFORNFV-59                          | Enable capability to publish results on Yardstick Dashboard |
+|                                       |                                                             |
++---------------------------------------+-------------------------------------------------------------+
+| KVMFORNFV-61                          | Define and integrate additional scenario - KVM+OVS+DPDK     |
+|                                       | with HA and NOHA for bare metal and virtual environments    |
+|                                       |                                                             |
++---------------------------------------+-------------------------------------------------------------+
+| KVMFORNFV-62                          | Define and integrate additional scenario - KVM+OVS+DPDK+BAR |
+|                                       | with HA and NOHA for bare metal and virtual environments    |
+|                                       |                                                             |
++---------------------------------------+-------------------------------------------------------------+
+| KVMFORNFV-63                          | Setup local Fuel environment                                |
+|                                       |                                                             |
++---------------------------------------+-------------------------------------------------------------+
+| KVMFORNFV-64                          | Fuel environment setup for local machine to debug Fuel      |
+|                                       | related integration issues                                  |
++---------------------------------------+-------------------------------------------------------------+
+
+Deliverables
+------------
+
+1 Software deliverables
+~~~~~~~~~~~~~~~~~~~~~~~~~
+* Danube 1.0 release of the KVM4NFV RPM and Debian packages for kvmfornfv
+
+* Added the following scenarios as part of the D-Release:
+
+  * os-nosdn-kvm_nfv_ovs_dpdk-noha
+
+  * os-nosdn-kvm_nfv_ovs_dpdk_bar-noha
+
+  * os-nosdn-kvm_nfv_ovs_dpdk-ha
+
+  * os-nosdn-kvm_nfv_ovs_dpdk_bar-ha
+
+* Configured InfluxDB and the `Grafana dashboard`_ for publishing KVM4NFV test results
+
+.. _Grafana dashboard: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest
+
+* The cyclictest test case is successfully implemented; it has the below test types:
+
+  * idle-idle
+
+  * CPUstress-idle
+
+  * IOstress-idle
+
+  * Memorystress-idle
+
+* Implemented the Noisy Neighbour feature: cyclictest under stress testing is implemented
+
+* The packet forwarding test case is implemented and currently supports the following test types:
+
+  * Packet forwarding to Host
+
+  * Packet forwarding to Guest
+
+  * Packet forwarding to Guest using SRIOV
+
+* The Ftrace debugging tool is supported in the D-Release. The logs collected by ftrace are stored in artifacts for future needs.
+
+* The PCM utility is part of the D-Release.
+  The future scope may include collection of read/write data and publishing
+  them in Grafana.
+
+* Either Apex or Fuel can be used for deployment of the os-nosdn-kvm-ha scenario
+
++------------------------------------------+------------------+-----------------+
+| **Scenario Name**                        | **Apex**         | **Fuel**        |
+|                                          |                  |                 |
++==========================================+==================+=================+
+| - os-nosdn-kvm-ha                        | ``Y``            | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk-noha         |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk-ha           |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-noha     |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-ha       |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+
+* Future scope may include adding Apex support for all the remaining scenarios.
+
+* The below documents are delivered for the Danube KVM4NFV release:
+
+  * User Guide
+
+  * Configuration Guide
+
+  * Installation Procedure
+
+  * Overview
+
+  * Release notes
+
+  * Glossary
+
+  * Scenarios
+
+  * Requirements Guide
+
+  * Overview Guide
+
+References
+----------
+
+For more information on the KVM4NFV Danube release, please see:
+
+https://wiki.opnfv.org/display/kvm/
diff --git a/docs/release/scenarios/.kvmfornfv.scenarios.description.rst.swp b/docs/release/scenarios/.kvmfornfv.scenarios.description.rst.swp
new file mode 100644
index 000000000..b6ef17624
--- /dev/null
+++ b/docs/release/scenarios/.kvmfornfv.scenarios.description.rst.swp
Binary files differ
diff --git a/docs/release/scenarios/abstract.rst b/docs/release/scenarios/abstract.rst
new file mode 100644
index 000000000..dcdd62fa9
--- /dev/null
+++ b/docs/release/scenarios/abstract.rst
@@ -0,0 +1,42 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+*****************
+Scenario Abstract
+*****************
+This chapter includes a detailed explanation of the various scenario files
+deployed as part of the KVM4NFV D-Release.
+
+Release Features
+----------------
+
++------------------------------------------+------------------+-----------------+
+| **Scenario Name**                        | **Colorado**     | **Danube**      |
+|                                          |                  |                 |
++==========================================+==================+=================+
+| - os-nosdn-kvm-ha                        | ``Y``            | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk-noha         |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk-ha           |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-noha     |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-ha       |                  | ``Y``           |
++------------------------------------------+------------------+-----------------+
+
+D-Release Scenarios Overview
+-------------------------------
+
++------------------------------------------+-----------------------+---------------------+------------------+----------+----------+
+| **Scenario Name**                        | **No of Controllers** | **No of Computes**  | **Plugin Names** | **DPDK** | **OVS**  |
+|                                          |                       |                     |                  |          |          |
++==========================================+=======================+=====================+==================+==========+==========+
+| - ``os-nosdn-kvm_nfv_ovs_dpdk-noha``     | 1                     | 3                   | KVM              | Y        | Y        |
++------------------------------------------+-----------------------+---------------------+------------------+----------+----------+
+| - ``os-nosdn-kvm_nfv_ovs_dpdk-ha``       | 3                     | 2                   | KVM              | Y        | Y        |
++------------------------------------------+-----------------------+---------------------+------------------+----------+----------+
+| - ``os-nosdn-kvm_nfv_ovs_dpdk_bar-noha`` | 1                     | 3                   | KVM & BAR        | Y        | Y        |
++------------------------------------------+-----------------------+---------------------+------------------+----------+----------+
+| - ``os-nosdn-kvm_nfv_ovs_dpdk_bar-ha``   | 3                     | 2                   | KVM & BAR        | Y        | Y        |
++------------------------------------------+-----------------------+---------------------+------------------+----------+----------+
diff --git a/docs/release/scenarios/index.rst b/docs/release/scenarios/index.rst
new file mode 100644
index 000000000..f1f93c31a
--- /dev/null
+++ b/docs/release/scenarios/index.rst
@@ -0,0 +1,60 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _kvmfornfv-scenarios:
+
+*********************************
+Scenario Overview and Description
+*********************************
+
+.. toctree::
+   :caption: Scenario Overview and Description
+   :numbered:
+   :maxdepth: 4
+
+   ./abstract.rst
+   ./kvmfornfv.scenarios.description.rst
+
+*******************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-noha Overview and Description
+*******************************************************
+
+.. toctree::
+   :caption: os-nosdn-kvm_nfv_ovs_dpdk-noha
+   :numbered:
+   :maxdepth: 3
+
+   ./os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
+
+*****************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-ha Overview and Description
+*****************************************************
+
+.. toctree::
+   :caption: os-nosdn-kvm_nfv_ovs_dpdk-ha
+   :numbered:
+   :maxdepth: 3
+
+   ./os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
+
+***********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-noha Overview and Description
+***********************************************************
+
+.. toctree::
+   :caption: os-nosdn-kvm_nfv_ovs_dpdk_bar-noha
+   :numbered:
+   :maxdepth: 3
+
+   ./os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
+
+*********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Overview and Description
+*********************************************************
+
+.. toctree::
+   :caption: os-nosdn-kvm_nfv_ovs_dpdk_bar-ha
+   :numbered:
+   :maxdepth: 3
+
+   ./os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
diff --git a/docs/release/scenarios/kvmfornfv.scenarios.description.rst b/docs/release/scenarios/kvmfornfv.scenarios.description.rst
new file mode 100644
index 000000000..5a5328666
--- /dev/null
+++ b/docs/release/scenarios/kvmfornfv.scenarios.description.rst
@@ -0,0 +1,493 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _scenario-guide:
+
+============================
+KVM4NFV Scenario-Description
+============================
+
+Abstract
+--------
+
+This document describes the procedure to deploy/test KVM4NFV scenarios in a
+nested virtualization environment. This has been verified with the
+os-nosdn-kvm-ha, os-nosdn-kvm-noha, os-nosdn-kvm_ovs_dpdk-ha,
+os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-ha test scenarios.
+
+Version Features
+----------------
+
++-----------------------------+---------------------------------------------+
+|                             |                                             |
+| **Release**                 | **Features**                                |
+|                             |                                             |
++=============================+=============================================+
+|                             | - Scenario Testing feature was not part of  |
+| Colorado                    |   the Colorado release of KVM4NFV           |
+|                             |                                             |
++-----------------------------+---------------------------------------------+
+|                             | - High Availability/No-High Availability    |
+|                             |   deployment configuration of the KVM4NFV   |
+|                             |   software suite                            |
+| Danube                      | - Multi-node setup with 3 controller and    |
+|                             |   2 compute nodes is deployed for HA        |
+|                             | - Multi-node setup with 1 controller and    |
+|                             |   3 compute nodes is deployed for NO-HA     |
+|                             | - Scenarios os-nosdn-kvm_ovs_dpdk-ha,       |
+|                             |   os-nosdn-kvm_ovs_dpdk_bar-ha,             |
+|                             |   os-nosdn-kvm_ovs_dpdk-noha,               |
+|                             |   os-nosdn-kvm_ovs_dpdk_bar-noha            |
+|                             |   are supported                             |
++-----------------------------+---------------------------------------------+
+
+
+Introduction
+------------
+The purpose of os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk_bar-ha and
+os-nosdn-kvm_ovs_dpdk-noha, os-nosdn-kvm_ovs_dpdk_bar-noha scenario testing is
+to test the High Availability/No-High Availability deployment and configuration
+of the OPNFV software suite with OpenStack and without SDN software.
+
+This OPNFV software suite includes the latest OPNFV KVM4NFV software packages
+for Linux kernel and QEMU patches for achieving low latency, and also OPNFV
+Barometer for traffic, performance and platform monitoring.
+
+The High Availability feature is achieved by deploying an OpenStack
+multi-node setup with 1 Fuel master, 3 controller and 2 compute nodes.
+
+The No-High Availability feature is achieved by deploying an OpenStack
+multi-node setup with 1 Fuel master, 1 controller and 3 compute nodes.
+
+KVM4NFV packages will be installed on the compute nodes as part of deployment.
+The scenario testcase deploys a multi-node setup by using the OPNFV Fuel deployer.
+
+System pre-requisites
+---------------------
+
+- RAM - Minimum 16GB
+- HARD DISK - Minimum 500GB
+- Linux OS installed and running
+- Nested virtualization enabled, which can be checked by:
+
+.. code:: bash
+
+    $ cat /sys/module/kvm_intel/parameters/nested
+    Y
+
+    $ cat /proc/cpuinfo | grep vmx
+
+*Note:*
+If nested virtualization is disabled, enable it by:
+
+.. code:: bash
+
+    For Ubuntu:
+    $ modprobe kvm_intel
+    $ echo Y > /sys/module/kvm_intel/parameters/nested
+    $ sudo reboot
+
+    For RHEL:
+    $ cat << EOF > /etc/modprobe.d/kvm_intel.conf
+    options kvm-intel nested=1
+    options kvm-intel enable_shadow_vmcs=1
+    options kvm-intel enable_apicv=1
+    options kvm-intel ept=1
+    EOF
+    $ cat << EOF > /etc/sysctl.d/98-rp-filter.conf
+    net.ipv4.conf.default.rp_filter = 0
+    net.ipv4.conf.all.rp_filter = 0
+    EOF
+    $ sudo reboot
+
+Environment Setup
+-----------------
+
+**Configuring Proxy**
+~~~~~~~~~~~~~~~~~~~~~
+
+For **Ubuntu**,
+create an apt.conf file in /etc/apt if it doesn't exist. It is used to set the
+proxy for apt-get when working behind a proxy server.
+
+.. code:: bash
+
+    Acquire::http::proxy "http://<username>:<password>@<proxy>:<port>/";
+    Acquire::https::proxy "https://<username>:<password>@<proxy>:<port>/";
+    Acquire::ftp::proxy "ftp://<username>:<password>@<proxy>:<port>/";
+    Acquire::socks::proxy "socks://<username>:<password>@<proxy>:<port>/";
+
+For **CentOS**,
+edit /etc/yum.conf to work behind a proxy server by adding the line below.
+
+.. code:: bash
+
+    $ echo "proxy=http://<username>:<password>@<proxy>:<port>/" >> /etc/yum.conf
+
+**Network Time Protocol (NTP) setup and configuration**
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Install ntp by:
+
+.. code:: bash
+
+    $ sudo apt-get update
+    $ sudo apt-get install -y ntp
+
+Insert the following two lines after the “server ntp.ubuntu.com” line and
+before the “# Access control configuration; see `link`_ for” line in the
+/etc/ntp.conf file:
+
+.. _link: /usr/share/doc/ntp-doc/html/accopt.html
+
+.. code:: bash
+
+    server 127.127.1.0
+    fudge 127.127.1.0 stratum 10
+
+Restart the ntp server to apply the changes:
+
+.. code:: bash
+
+    $ sudo service ntp restart
+
+Scenario Testing
+----------------
+
+There are three ways of performing scenario testing:
+   - 1 Fuel
+   - 2 OPNFV-Playground
+   - 3 Jenkins Project
+
+Fuel
+~~~~
+
+**1 Clone the fuel repo:**
+
+.. code:: bash
+
+    $ git clone https://gerrit.opnfv.org/gerrit/fuel.git
+
+**2 Check out the specific version of the branch to deploy:**
+
+The default branch is master; to use a stable release version, use the below:
+
+.. code:: bash
+
+    To check the current branch
+    $ git branch
+
+    To check out a specific branch
+    $ git checkout stable/Colorado
+
+**3 Building the Fuel ISO:**
+
+.. code:: bash
+
+    $ cd ~/fuel/ci/
+    $ ./build.sh -h
+
+Provide the necessary options that are required to build an ISO.
+Create a ``customized iso`` as per the deployment needs.
+
+.. code:: bash
+
+    $ cd ~/fuel/build/
+    $ make
+
+Alternatively, download the latest stable Fuel ISO from `here`_.
+
+.. _here: http://artifacts.opnfv.org/fuel.html
+
+.. code:: bash
+
+    http://artifacts.opnfv.org/fuel.html
+
+**4 Creating a new deployment scenario**
+
+``(i). Naming the scenario file``
+
+Include the new deployment scenario yaml file in ~/fuel/deploy/scenario/. The
+file name should adhere to the following format:
+
+.. code:: bash
+
+    <ha | no-ha>_<SDN Controller>_<feature-1>_..._<feature-n>.yaml
+
+``(ii). Meta data``
+
+The deployment configuration file should contain configuration metadata as stated below:
+
+.. code:: bash
+
+    deployment-scenario-metadata:
+          title:
+          version:
+          created:
+
+``(iii). “stack-extensions” Module``
+
+To include fuel plugins in the deployment configuration file, use the “stack-extensions” key:
+
+.. code:: bash
+
+    Example:
+          stack-extensions:
+             - module: fuel-plugin-collectd-ceilometer
+               module-config-name: fuel-barometer
+               module-config-version: 1.0.0
+               module-config-override:
+               #module-config overrides
+
+**Note:**
+The “module-config-name” and “module-config-version” should be the same as the
+name of the plugin configuration file.
+
+The “module-config-override” is used to configure the plugin by overriding the
+corresponding keys in the plugin config yaml file present in
+~/fuel/deploy/config/plugins/.
+
+``(iv). “dea-override-config” Module``
+
+To configure the HA/No-HA mode, network segmentation types and role to node
+assignments, use the “dea-override-config” key.
+
+.. code:: bash
+
+    Example:
+    dea-override-config:
+        environment:
+            mode: ha
+            net_segment_type: tun
+        nodes:
+        - id: 1
+          interfaces: interfaces_1
+          role: mongo,controller,opendaylight
+        - id: 2
+          interfaces: interfaces_1
+          role: mongo,controller
+        - id: 3
+          interfaces: interfaces_1
+          role: mongo,controller
+        - id: 4
+          interfaces: interfaces_1
+          role: ceph-osd,compute
+        - id: 5
+          interfaces: interfaces_1
+          role: ceph-osd,compute
+        settings:
+            editable:
+                storage:
+                    ephemeral_ceph:
+                        description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes).
+                        label: Ceph RBD for ephemeral volumes (Nova)
+                        type: checkbox
+                        value: true
+                        weight: 75
+                    images_ceph:
+                        description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing.
+                        label: Ceph RBD for images (Glance)
+                        restrictions:
+                        - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
+                        type: checkbox
+                        value: true
+                        weight: 30
+
+Under “dea-override-config” you should provide at least
+{environment:{mode:'value'},{net_segment_type:'value'}} and {nodes:1,2,...},
+and can also enable additional stack features such as ceph and heat, which
+override the corresponding keys in dea_base.yaml and dea_pod_override.yaml.
+
+``(v). “dha-override-config” Module``
+
+In order to configure the pod dha definition, use the “dha-override-config” key.
+This is an optional key present at the end of the scenario file.
+
+``(vi). Mapping to short scenario name``
+
+The scenario.yaml file is used to map the short names of scenarios to one or
+more deployment scenario configuration yaml files.
+The short scenario names should follow the scheme below:
+
+.. code:: bash
+
+    [os]-[controller]-[feature]-[mode]-[option]
+
+    [os]: mandatory
+    possible value: os
+
+Please note that this field is needed in order to select parent jobs to list and do blocking relations between them.
+
+.. code:: bash
+
+
+    [controller]: mandatory
+    example values: nosdn, ocl, odl, onos
+
+    [mode]: mandatory
+    possible values: ha, noha
+
+    [option]: optional
+
+Used for scenarios that do not fit into the naming scheme.
OPNFV-Playground
~~~~~~~~~~~~~~~~

Install OPNFV-Playground (the tool chain to deploy/test CI scenarios in fuel@opnfv):

.. code:: bash

    $ cd ~
    $ git clone https://github.com/jonasbjurel/OPNFV-Playground.git
    $ cd OPNFV-Playground/ci_fuel_opnfv/

- Follow the README.rst in the ~/OPNFV-Playground/ci_fuel_opnfv sub-folder to complete all necessary installation and setup.
- The section “RUNNING THE PIPELINE” in README.rst explains how to use this ci_pipeline to deploy/test CI test scenarios; you can also run

.. code:: bash

    $ ./ci_pipeline.sh --help   # to learn more options

``1 Downgrade paramiko package from 2.x.x to 1.10.0``

The paramiko 2.x.x package does not currently work with the OPNFV-Playground tool chain; Jira ticket FUEL-188 has been raised for this issue.

Check the paramiko package version with the following steps on your system:

.. code:: bash

    $ python
    Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.

    >>> import paramiko
    >>> print paramiko.__version__
    >>> exit()

This prints the current paramiko package version; if it is 2.x.x, uninstall it:

.. code:: bash

    $ sudo pip uninstall paramiko

Ubuntu 14.04 LTS provides the python-paramiko package (1.10.0); install it:

.. code:: bash

    $ sudo apt-get install python-paramiko

Verify the installed version:

.. code:: bash

    $ python
    >>> import paramiko
    >>> print paramiko.__version__
    >>> exit()
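If the distribution package is unavailable, the version can instead be pinned with pip. This is an assumption-based alternative rather than a step from the original procedure; paramiko 1.10.0 is the version the guide requires:

.. code:: bash

    # Install the exact version expected by the tool chain
    $ sudo pip install paramiko==1.10.0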
``2 Clone the fuel@opnfv``

Check out the specific version of the required branch of fuel@opnfv:

.. code:: bash

    $ cd ~
    $ git clone https://gerrit.opnfv.org/gerrit/fuel.git
    $ cd fuel
    # By default this is the master branch; in order to deploy on the Colorado/Danube branch, do:
    $ git checkout stable/Danube

``3 Creating the scenario``

Implement the scenario file as described in 3.1.4.

``4 Deploying the scenario``

You can use the following commands to deploy/test the os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios.

.. code:: bash

    $ cd ~/OPNFV-Playground/ci_fuel_opnfv/

For os-nosdn-kvm_ovs_dpdk-ha:

.. code:: bash

    $ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk-ha

For os-nosdn-kvm_ovs_dpdk_bar-ha:

.. code:: bash

    $ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk_bar-ha

“ci_pipeline.sh” first clones the local fuel repo, then deploys the
os-nosdn-kvm_ovs_dpdk-ha/os-nosdn-kvm_ovs_dpdk_bar-ha scenario from the given ISO, and runs Functest
and Yardstick tests. The log of the deployment/test (ci.log) can be found in
~/OPNFV-Playground/ci_fuel_opnfv/artifact/master/YYYY-MM-DD—HH.mm, where YYYY-MM-DD—HH.mm is the
date/time at which “ci_pipeline.sh” was started.

Note:

.. code:: bash

    Check $ ./ci_pipeline.sh -h for further information.

Jenkins Project
~~~~~~~~~~~~~~~

The os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios can be executed from the following Jenkins projects:

    ``HA scenarios:``
        1. "fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk-ha)
        2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-ha)

    ``NOHA scenarios:``
        1. "fuel-os-nosdn-kvm_ovs_dpdk-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk-noha)
        2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-noha)

diff --git a/docs/release/scenarios/os-nosdn-kvm-ha/index.rst b/docs/release/scenarios/os-nosdn-kvm-ha/index.rst
new file mode 100755
index 000000000..ab82d96b8
--- /dev/null
+++ b/docs/release/scenarios/os-nosdn-kvm-ha/index.rst
@@ -0,0 +1,14 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

.. _kvmfornfv-os-nosdn-kvm-ha:

****************************************
os-nosdn-kvm-ha Overview and Description
****************************************

.. toctree::
   :numbered:
   :maxdepth: 3

   os-nosdn-kvm-ha.description.rst

diff --git a/docs/release/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst b/docs/release/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
new file mode 100644
index 000000000..f64f26ffc
--- /dev/null
+++ b/docs/release/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
@@ -0,0 +1,133 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International License.

.. http://creativecommons.org/licenses/by/4.0

===========================
os-nosdn-kvm-ha Description
===========================

Introduction
------------

.. In this section explain the purpose of the scenario and the
   types of capabilities provided

The purpose of os-nosdn-kvm-ha scenario testing is to test the
High Availability deployment and configuration of the OPNFV software suite
with OpenStack and without SDN software. This OPNFV software suite
includes the OPNFV KVM4NFV latest software packages for Linux Kernel and
QEMU patches for achieving low latency. The High Availability feature is achieved
by deploying an OpenStack multi-node setup with 3 controller and 2 compute nodes.

KVM4NFV packages will be installed on compute nodes as part of deployment.
This scenario testcase deploys a multi-node setup using the OPNFV Fuel deployer.

Scenario Components and Composition
-----------------------------------
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
This scenario deploys the High Availability OPNFV Cloud based on the
configurations provided in ha_nfv-kvm_heat_ceilometer_scenario.yaml.
This yaml file contains the following configurations and is passed as an
argument to the deploy.py script:

* ``scenario.yaml:`` This configuration file defines the translation between a
  short deployment scenario name (os-nosdn-kvm-ha) and an actual deployment
  scenario configuration file (ha_nfv-kvm_heat_ceilometer_scenario.yaml).

* ``deployment-scenario-metadata:`` Contains the configuration metadata such as
  title, version, created and comment.

* ``stack-extensions:`` Stack extensions are OPNFV added-value features in the form
  of a fuel-plugin. Plugins listed in stack extensions are enabled and
  configured.

* ``dea-override-config:`` Used to configure the HA mode, network segmentation
  types and role-to-node assignments. These configurations override the
  corresponding keys in dea_base.yaml and dea_pod_override.yaml.
  These keys are used to deploy multiple nodes (3 controllers, 2 computes)
  as mentioned below.

  * **Node 1**: This node has the MongoDB and Controller roles. The controller
    node runs the Identity service, Image Service, management portions of
    Compute and Networking, the Networking plug-in and the dashboard. The
    Telemetry service, which was designed to support billing systems for
    OpenStack cloud resources, uses a NoSQL database to store information.
    The database typically runs on the controller node.

  * **Node 2**: This node has the Controller and Ceph-osd roles. Ceph is a
    massively scalable, open source, distributed storage system. It is
    comprised of an object store, block store and a POSIX-compliant distributed
    file system. Enabling Ceph configures Nova to store ephemeral volumes in
    RBD, configures Glance to use the Ceph RBD backend to store images,
    configures Cinder to store volumes in Ceph RBD images and configures the
    default number of object replicas in Ceph.

  * **Node 3**: This node has the Controller role in order to achieve high
    availability.

  * **Node 4**: This node has the Compute role. The compute node runs the
    hypervisor portion of Compute that operates tenant virtual machines
    or instances. By default, Compute uses KVM as the hypervisor.

  * **Node 5**: This node also has the Compute role.

* ``dha-override-config:`` Provides information about the VM definition and
  network config for virtual deployment. These configurations override
  the pod dha definition and point to the controller, compute and
  fuel definition files.

* The os-nosdn-kvm-ha scenario is successful when all the 5 nodes are accessible,
  up and running.
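The accessibility criterion above can be checked with a small script. A minimal sketch, assuming the five node IP addresses are taken from the Fuel dashboard; the addresses below are placeholders, not values from this guide:

.. code:: bash

    #!/bin/bash
    # Ping each deployed node once and report any that are unreachable.
    # Replace the placeholder IPs with the addresses Fuel assigned.
    NODES="10.20.0.3 10.20.0.4 10.20.0.5 10.20.0.6 10.20.0.7"
    for ip in $NODES; do
        if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
            echo "$ip is up"
        else
            echo "$ip is NOT reachable"
        fi
    done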
Scenario Usage Overview
-----------------------
.. Provide a brief overview on how to use the scenario and the features available to the
.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
.. where the specifics of the features are covered including examples and API's

* The high availability feature can be achieved by executing deploy.py with
  ha_nfv-kvm_heat_ceilometer_scenario.yaml as an argument.
* Install Fuel Master and deploy OPNFV Cloud from scratch on a Hardware
  Environment. Example:

.. code:: bash

    sudo python deploy.py -iso ~/ISO/opnfv.iso -dea ~/CONF/hardware/dea.yaml -dha ~/CONF/hardware/dha.yaml -s /mnt/images -b pxebr -log ~/Deployment-888.log.tar.gz

* Install Fuel Master and deploy OPNFV Cloud from scratch on a Virtual
  Environment. Example:

.. code:: bash

    sudo python deploy.py -iso ~/ISO/opnfv.iso -dea ~/CONF/virtual/dea.yaml -dha ~/CONF/virtual/dha.yaml -s /mnt/images -log ~/Deployment-888.log.tar.gz

* The os-nosdn-kvm-ha scenario can be executed from the jenkins project
  "fuel-os-nosdn-kvm-ha-baremetal-daily-master".
* This scenario provides the High Availability feature by deploying
  3 controller and 2 compute nodes and checking that all the 5 nodes
  are accessible (IP assigned, up and running).
* The test scenario is passed if the deployment is successful and all 5 nodes are
  accessible (IP assigned, up and running).
* Note that this scenario does not run any testcases on top of the deployment.

Known Limitations, Issues and Workarounds
-----------------------------------------
.. Explain any known limitations here.

* The test scenario os-nosdn-kvm-ha result is not stable. After the node reboot
  triggered by the kvm plugin, the puppet agent (mcollective) sometimes does not
  respond within the given time.

References
----------

For more information on the OPNFV Danube release, please visit
http://www.opnfv.org/danube

diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst
new file mode 100755
index 000000000..ddb6071c8
--- /dev/null
+++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst
@@ -0,0 +1,14 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

.. _kvmfornfv-os-nosdn-kvm_nfv_ovs_dpdk-ha:

*****************************************************
os-nosdn-kvm_nfv_ovs_dpdk-ha Overview and Description
*****************************************************

.. toctree::
   :numbered:
   :maxdepth: 3

   ./os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst

diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
new file mode 100644
index 000000000..a96130cad
--- /dev/null
+++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
@@ -0,0 +1,247 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International License.

.. http://creativecommons.org/licenses/by/4.0

========================================
os-nosdn-kvm_nfv_ovs_dpdk-ha Description
========================================

Introduction
------------

.. In this section explain the purpose of the scenario and the
   types of capabilities provided

The purpose of os-nosdn-kvm_ovs_dpdk-ha scenario testing is to test the
High Availability deployment and configuration of the OPNFV software suite
with OpenStack and without SDN software. This OPNFV software suite
includes the OPNFV KVM4NFV latest software packages for Linux Kernel and
QEMU patches for achieving low latency. The High Availability feature is achieved
by deploying an OpenStack multi-node setup with 3 controller and 2 compute nodes.

KVM4NFV packages will be installed on compute nodes as part of deployment.
This scenario testcase deploys a multi-node setup using the OPNFV Fuel deployer.

Scenario Components and Composition
-----------------------------------
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
+ +This scenario deploys the High Availability OPNFV Cloud based on the +configurations provided in ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml. +This yaml file contains following configurations and is passed as an +argument to deploy.py script + +* ``scenario.yaml:`` This configuration file defines translation between a + short deployment scenario name(os-nosdn-kvm_ovs_dpdk-ha) and an actual deployment + scenario configuration file(ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml) + +* ``deployment-scenario-metadata:`` Contains the configuration metadata like + title,version,created,comment. + +.. code:: bash + + deployment-scenario-metadata: + title: NFV KVM and OVS-DPDK HA deployment + version: 0.0.1 + created: Dec 20 2016 + comment: NFV KVM and OVS-DPDK + +* ``stack-extensions:`` Stack extentions are opnfv added value features in form + of a fuel-plugin.Plugins listed in stack extensions are enabled and + configured. os-nosdn-kvm_ovs_dpdk-ha scenario currently uses KVM-1.0.0 plugin. + +.. code:: bash + + stack-extensions: + - module: fuel-plugin-kvm + module-config-name: fuel-nfvkvm + module-config-version: 1.0.0 + module-config-override: + # Module config overrides + +* ``dea-override-config:`` Used to configure the HA mode,network segmentation + types and role to node assignments.These configurations overrides + corresponding keys in the dea_base.yaml and dea_pod_override.yaml. + These keys are used to deploy multiple nodes(``3 controllers,2 computes``) + as mention below. + + * **Node 1**: + - This node has MongoDB and Controller roles + - The controller node runs the Identity service, Image Service, management portions of + Compute and Networking, Networking plug-in and the dashboard + - Uses VLAN as an interface + + * **Node 2**: + - This node has Ceph-osd and Controller roles + - The controller node runs the Identity service, Image Service, management portions of + Compute and Networking, Networking plug-in and the dashboard + - Ceph is a massively scalable, open source, distributed storage system + - Uses VLAN as an interface + + * **Node 3**: + - This node has Controller role in order to achieve high availability. + - Uses VLAN as an interface + + * **Node 4**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + * **Node 5**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + The below is the ``dea-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file. + +.. code:: bash + + dea-override-config: + fuel: + FEATURE_GROUPS: + - experimental + nodes: + - id: 1 + interfaces: interfaces_1 + role: controller + - id: 2 + interfaces: interfaces_1 + role: mongo,controller + - id: 3 + interfaces: interfaces_1 + role: ceph-osd,controller + - id: 4 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + - id: 5 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + + attributes_1: + hugepages: + dpdk: + value: 1024 + nova: + value: + '2048': 1024 + + settings: + editable: + storage: + ephemeral_ceph: + description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. 
Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes). + label: Ceph RBD for ephemeral volumes (Nova) + type: checkbox + value: true + weight: 75 + images_ceph: + description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing. + label: Ceph RBD for images (Glance) + restrictions: + - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected. + type: checkbox + value: true + weight: 30 + +* ``dha-override-config:`` Provides information about the VM definition and + Network config for virtual deployment.These configurations overrides + the pod dha definition and points to the controller,compute and + fuel definition files. + + The below is the ``dha-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file. + +.. code:: bash + + dha-override-config: + nodes: + - id: 1 + libvirtName: controller1 + libvirtTemplate: templates/virtual_environment/vms/controller.xml + - id: 2 + libvirtName: controller2 + libvirtTemplate: templates/virtual_environment/vms/controller.xml + - id: 3 + libvirtName: controller3 + libvirtTemplate: templates/virtual_environment/vms/controller.xml + - id: 4 + libvirtName: compute1 + libvirtTemplate: templates/virtual_environment/vms/compute.xml + - id: 5 + libvirtName: compute2 + libvirtTemplate: templates/virtual_environment/vms/compute.xml + - id: 6 + libvirtName: fuel-master + libvirtTemplate: templates/virtual_environment/vms/fuel.xml + isFuel: yes + username: root + password: r00tme + + +* os-nosdn-kvm_ovs_dpdk-ha scenario is successful when all the 5 Nodes are accessible, + up and running. + +**Note:** + +* In os-nosdn-kvm_ovs_dpdk-ha scenario, OVS is installed on the compute nodes with DPDK configured + +* Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml + +* Hugepages are only configured for compute nodes + +* This results in faster communication and data transfer among the compute nodes + +Scenario Usage Overview +----------------------- +.. Provide a brief overview on how to use the scenario and the features available to the +.. user. This should be an "introduction" to the userguide document, and explicitly link to it, +.. where the specifics of the features are covered including examples and API's + +* The high availability feature can be acheived by executing deploy.py with + ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml as an argument. +* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware + Environment: + + +Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario: + +.. code:: bash + + $ cd ~/fuel/ci/ + $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso + +where, + -b is used to specify the configuration directory + + -i is used to specify the image downloaded from artifacts. + +**Note:** + +.. code:: bash + + Check $ sudo ./deploy.sh -h for further information. + +* os-nosdn-kvm_ovs_dpdk-ha scenario can be executed from the jenkins project + "fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master" +* This scenario provides the High Availability feature by deploying + 3 controller,2 compute nodes and checking if all the 5 nodes + are accessible(IP,up & running). 
+* Test Scenario is passed if deployment is successful and all 5 nodes have + accessibility (IP , up & running). + +Known Limitations, Issues and Workarounds +----------------------------------------- +.. Explain any known limitations here. + +* Test scenario os-nosdn-kvm_ovs_dpdk-ha result is not stable. + +References +---------- + +For more information on the OPNFV Danube release, please visit +http://www.opnfv.org/Danube diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst new file mode 100755 index 000000000..742ddb1ee --- /dev/null +++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst @@ -0,0 +1,14 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _kvmfornfv-os-nosdn-kvm_nfv_ovs_dpdk-noha: + +******************************************************* +os-nosdn-kvm_nfv_ovs_dpdk-noha Overview and Description +******************************************************* + +.. toctree:: + :numbered: + :maxdepth: 3 + + ./os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst new file mode 100644 index 000000000..a7778d963 --- /dev/null +++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst @@ -0,0 +1,239 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. + +.. http://creativecommons.org/licenses/by/4.0 + +========================================== +os-nosdn-kvm_nfv_ovs_dpdk-noha Description +========================================== + +Introduction +------------ + +.. In this section explain the purpose of the scenario and the + types of capabilities provided + +The purpose of os-nosdn-kvm_ovs_dpdk-noha scenario testing is to test the No +High Availability deployment and configuration of OPNFV software suite +with OpenStack and without SDN software. This OPNFV software suite +includes OPNFV KVM4NFV latest software packages for Linux Kernel and +QEMU patches for achieving low latency. No High Availability feature is achieved +by deploying OpenStack multi-node setup with 1 controller and 3 computes nodes. + +KVM4NFV packages will be installed on compute nodes as part of deployment. +This scenario testcase deployment is happening on multi-node by using OPNFV Fuel deployer. + +Scenario Components and Composition +------------------------------------ +.. In this section describe the unique components that make up the scenario, +.. what each component provides and why it has been included in order +.. to communicate to the user the capabilities available in this scenario. + +This scenario deploys the No High Availability OPNFV Cloud based on the +configurations provided in no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml. +This yaml file contains following configurations and is passed as an +argument to deploy.py script + +* ``scenario.yaml:`` This configuration file defines translation between a + short deployment scenario name(os-nosdn-kvm_ovs_dpdk-noha) and an actual deployment + scenario configuration file(no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml) + +* ``deployment-scenario-metadata:`` Contains the configuration metadata like + title,version,created,comment. + +.. 
code:: bash + + deployment-scenario-metadata: + title: NFV KVM and OVS-DPDK NOHA deployment + version: 0.0.1 + created: Dec 20 2016 + comment: NFV KVM and OVS-DPDK + +* ``stack-extensions:`` Stack extentions are opnfv added value features in form + of a fuel-plugin.Plugins listed in stack extensions are enabled and + configured. os-nosdn-kvm_ovs_dpdk-noha scenario currently uses KVM-1.0.0 plugin. + +.. code:: bash + + stack-extensions: + - module: fuel-plugin-kvm + module-config-name: fuel-nfvkvm + module-config-version: 1.0.0 + module-config-override: + # Module config overrides + +* ``dea-override-config:`` Used to configure the NO-HA mode,network segmentation + types and role to node assignments.These configurations overrides + corresponding keys in the dea_base.yaml and dea_pod_override.yaml. + These keys are used to deploy multiple nodes(``1 controller,3 computes``) + as mention below. + + * **Node 1**: + - This node has MongoDB and Controller roles + - The controller node runs the Identity service, Image Service, management portions of + Compute and Networking, Networking plug-in and the dashboard + - Uses VLAN as an interface + + * **Node 2**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + * **Node 3**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + * **Node 4**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + The below is the ``dea-override-config`` of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file. + +.. code:: bash + + dea-override-config: + fuel: + FEATURE_GROUPS: + - experimental + environment: + net_segment_type: vlan + nodes: + - id: 1 + interfaces: interfaces_vlan + role: mongo,controller + - id: 2 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + - id: 3 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + - id: 4 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + + attributes_1: + hugepages: + dpdk: + value: 1024 + nova: + value: + '2048': 1024 + + network: + networking_parameters: + segmentation_type: vlan + networks: + - cidr: null + gateway: null + ip_ranges: [] + meta: + configurable: false + map_priority: 2 + name: private + neutron_vlan_range: true + notation: null + render_addr_mask: null + render_type: null + seg_type: vlan + use_gateway: false + vlan_start: null + name: private + vlan_start: null + + settings: + editable: + storage: + ephemeral_ceph: + description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes). + label: Ceph RBD for ephemeral volumes (Nova) + type: checkbox + value: true + weight: 75 + images_ceph: + description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing. 
+ label: Ceph RBD for images (Glance) + restrictions: + - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected. + type: checkbox + value: true + weight: 30 + +* ``dha-override-config:`` Provides information about the VM definition and + Network config for virtual deployment.These configurations overrides + the pod dha definition and points to the controller,compute and + fuel definition files. The no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml + has no dha-config changes i.e., default configuration is used. + +* os-nosdn-kvm_ovs_dpdk-noha scenario is successful when all the 4 Nodes are accessible, + up and running. + + + +**Note:** + +* In os-nosdn-kvm_ovs_dpdk-noha scenario, OVS is installed on the compute nodes with DPDK configured + +* Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml + +* Hugepages are only configured for compute nodes + +* This results in faster communication and data transfer among the compute nodes + + +Scenario Usage Overview +----------------------- + +.. Provide a brief overview on how to use the scenario and the features available to the +.. user. This should be an "introduction" to the userguide document, and explicitly link to it, +.. where the specifics of the features are covered including examples and API's + +* The high availability feature is disabled and deploymet is done by deploy.py with + noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml as an argument. +* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware + Environment: + + +Command to deploy the os-nosdn-kvm_ovs_dpdk-noha scenario: + +.. code:: bash + + $ cd ~/fuel/ci/ + $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso + +where, + -b is used to specify the configuration directory + + -i is used to specify the image downloaded from artifacts. + +**Note:** + +.. code:: bash + + Check $ sudo ./deploy.sh -h for further information. + +* os-nosdn-kvm_ovs_dpdk-noha scenario can be executed from the jenkins project + "fuel-os-nosdn-kvm_ovs_dpdk-noha-baremetal-daily-master" +* This scenario provides the No High Availability feature by deploying + 1 controller,3 compute nodes and checking if all the 4 nodes + are accessible(IP,up & running). +* Test Scenario is passed if deployment is successful and all 4 nodes have + accessibility (IP , up & running). + +Known Limitations, Issues and Workarounds +----------------------------------------- +.. Explain any known limitations here. + +* Test scenario os-nosdn-kvm_ovs_dpdk-noha result is not stable. + +References +---------- + +For more information on the OPNFV Danube release, please visit +http://www.opnfv.org/Danube diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst new file mode 100755 index 000000000..a8192edcc --- /dev/null +++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst @@ -0,0 +1,14 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _kvmfornfv-os-nosdn-kvm_nfv_ovs_dpdk_bar-ha: + +********************************************************* +os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Overview and Description +********************************************************* + +.. 
toctree:: + :numbered: + :maxdepth: 3 + + ./os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst new file mode 100644 index 000000000..0ab20514a --- /dev/null +++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst @@ -0,0 +1,257 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. + +.. http://creativecommons.org/licenses/by/4.0 + +============================================ +os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Description +============================================ + +Introduction +------------ + +.. In this section explain the purpose of the scenario and the + types of capabilities provided + +The purpose of os-nosdn-kvm_ovs_dpdk_bar-ha scenario testing is to test the +High Availability deployment and configuration of OPNFV software suite +with OpenStack and without SDN software. This OPNFV software suite +includes OPNFV KVM4NFV latest software packages for Linux Kernel and +QEMU patches for achieving low latency. High Availability feature is achieved +by deploying OpenStack multi-node setup with 3 controllers and 2 computes nodes. + +OPNFV Barometer packages is used for traffic,performance and platform monitoring. +KVM4NFV packages will be installed on compute nodes as part of deployment. +This scenario testcase deployment is happening on multi-node by using OPNFV Fuel deployer. + +Scenario Components and Composition +----------------------------------- +.. In this section describe the unique components that make up the scenario, +.. what each component provides and why it has been included in order +.. to communicate to the user the capabilities available in this scenario. + +This scenario deploys the High Availability OPNFV Cloud based on the +configurations provided in ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml. +This yaml file contains following configurations and is passed as an +argument to deploy.py script + +* ``scenario.yaml:`` This configuration file defines translation between a + short deployment scenario name(os-nosdn-kvm_ovs_dpdk_bar-ha) and an actual deployment + scenario configuration file(ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml) + +* ``deployment-scenario-metadata:`` Contains the configuration metadata like + title,version,created,comment. + +.. code:: bash + + deployment-scenario-metadata: + title: NFV KVM and OVS-DPDK HA deployment + version: 0.0.1 + created: Dec 20 2016 + comment: NFV KVM and OVS-DPDK + +* ``stack-extensions:`` Stack extentions are opnfv added value features in form + of a fuel-plugin.Plugins listed in stack extensions are enabled and + configured. os-nosdn-kvm_ovs_dpdk_bar-ha scenario currently uses KVM-1.0.0 plugin and barometer plugin. + +.. code:: bash + + stack-extensions: + - module: fuel-plugin-kvm + module-config-name: fuel-nfvkvm + module-config-version: 1.0.0 + module-config-override: + # Module config overrides + - module: fuel-plugin-collectd-ceilometer + module-config-name: fuel-barometer + module-config-version: 1.0.0 + module-config-override: + # Module config overrides + + +* ``dea-override-config:`` Used to configure the HA mode,network segmentation + types and role to node assignments.These configurations overrides + corresponding keys in the dea_base.yaml and dea_pod_override.yaml. 
+ These keys are used to deploy multiple nodes(``3 controllers,2 computes``) + as mention below. + + * **Node 1**: + - This node has MongoDB and Controller roles + - The controller node runs the Identity service, Image Service, management portions of + Compute and Networking, Networking plug-in and the dashboard + - Uses VLAN as an interface + + * **Node 2**: + - This node has Ceph-osd and Controller roles + - The controller node runs the Identity service, Image Service, management portions of + Compute and Networking, Networking plug-in and the dashboard + - Ceph is a massively scalable, open source, distributed storage system + - Uses VLAN as an interface + + * **Node 3**: + - This node has Controller role in order to achieve high availability. + - Uses VLAN as an interface + + * **Node 4**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + * **Node 5**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + The below is the ``dea-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file. + +.. code:: bash + + dea-override-config: + fuel: + FEATURE_GROUPS: + - experimental + nodes: + - id: 1 + interfaces: interfaces_1 + role: controller + - id: 2 + interfaces: interfaces_1 + role: mongo,controller + - id: 3 + interfaces: interfaces_1 + role: ceph-osd,controller + - id: 4 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + - id: 5 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + + attributes_1: + hugepages: + dpdk: + value: 1024 + nova: + value: + '2048': 1024 + + settings: + editable: + storage: + ephemeral_ceph: + description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes). + label: Ceph RBD for ephemeral volumes (Nova) + type: checkbox + value: true + weight: 75 + images_ceph: + description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing. + label: Ceph RBD for images (Glance) + restrictions: + - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected. + type: checkbox + value: true + weight: 30 + +* ``dha-override-config:`` Provides information about the VM definition and + Network config for virtual deployment.These configurations overrides + the pod dha definition and points to the controller,compute and + fuel definition files. + + The below is the ``dha-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file. + +.. 
code:: bash + + dha-override-config: + nodes: + - id: 1 + libvirtName: controller1 + libvirtTemplate: templates/virtual_environment/vms/controller.xml + - id: 2 + libvirtName: controller2 + libvirtTemplate: templates/virtual_environment/vms/controller.xml + - id: 3 + libvirtName: controller3 + libvirtTemplate: templates/virtual_environment/vms/controller.xml + - id: 4 + libvirtName: compute1 + libvirtTemplate: templates/virtual_environment/vms/compute.xml + - id: 5 + libvirtName: compute2 + libvirtTemplate: templates/virtual_environment/vms/compute.xml + - id: 6 + libvirtName: fuel-master + libvirtTemplate: templates/virtual_environment/vms/fuel.xml + isFuel: yes + username: root + password: r00tme + + +* os-nosdn-kvm_ovs_dpdk_bar-ha scenario is successful when all the 5 Nodes are accessible, up and running. + + +**Note:** + +* In os-nosdn-kvm_ovs_dpdk_bar-ha scenario, OVS is installed on the compute nodes with DPDK configured + +* Baraometer plugin is also implemented along with KVM plugin + +* Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml + +* Hugepages are only configured for compute nodes + +* This results in faster communication and data transfer among the compute nodes + + +Scenario Usage Overview +------------------------ +.. Provide a brief overview on how to use the scenario and the features available to the +.. user. This should be an "introduction" to the userguide document, and explicitly link to it, +.. where the specifics of the features are covered including examples and API's + +* The high availability feature can be acheived by executing deploy.py with + ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml as an argument. +* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware + Environment: + + +Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-ha scenario: + +.. code:: bash + + $ cd ~/fuel/ci/ + $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso + +where, + -b is used to specify the configuration directory + + -i is used to specify the image downloaded from artifacts. + +**Note:** + +.. code:: bash + + Check $ sudo ./deploy.sh -h for further information. + +* os-nosdn-kvm_ovs_dpdk_bar-ha scenario can be executed from the jenkins project + "fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master" +* This scenario provides the High Availability feature by deploying + 3 controller,2 compute nodes and checking if all the 5 nodes + are accessible(IP,up & running). +* Test Scenario is passed if deployment is successful and all 5 nodes have + accessibility (IP , up & running). + +Known Limitations, Issues and Workarounds +----------------------------------------- +.. Explain any known limitations here. + +* Test scenario os-nosdn-kvm_ovs_dpdk_bar-ha result is not stable. + +References +---------- + +For more information on the OPNFV Danube release, please visit +http://www.opnfv.org/Danube diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst new file mode 100755 index 000000000..3a07e98c9 --- /dev/null +++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst @@ -0,0 +1,14 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. 
_kvmfornfv-os-nosdn-kvm_nfv_ovs_dpdk_bar-noha: + +*********************************************************** +os-nosdn-kvm_nfv_ovs_dpdk_bar-noha Overview and Description +*********************************************************** + +.. toctree:: + :numbered: + :maxdepth: 3 + + ./os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst diff --git a/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst new file mode 100644 index 000000000..47a7f1034 --- /dev/null +++ b/docs/release/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst @@ -0,0 +1,244 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. + +.. http://creativecommons.org/licenses/by/4.0 + +============================================ +os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Description +============================================ + +Introduction +------------- + +.. In this section explain the purpose of the scenario and the + types of capabilities provided + +The purpose of os-nosdn-kvm_ovs_dpdk_bar-noha scenario testing is to test the +No High Availability deployment and configuration of OPNFV software suite +with OpenStack and without SDN software. This OPNFV software suite +includes OPNFV KVM4NFV latest software packages for Linux Kernel and +QEMU patches for achieving low latency.No High Availability feature is achieved +by deploying OpenStack multi-node setup with 1 controller and 3 computes nodes. + +OPNFV Barometer packages is used for traffic,performance and platform monitoring. +KVM4NFV packages will be installed on compute nodes as part of deployment. +This scenario testcase deployment is happening on multi-node by using OPNFV Fuel deployer. + +Scenario Components and Composition +------------------------------------ +.. In this section describe the unique components that make up the scenario, +.. what each component provides and why it has been included in order +.. to communicate to the user the capabilities available in this scenario. + +This scenario deploys the No High Availability OPNFV Cloud based on the +configurations provided in no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml. +This yaml file contains following configurations and is passed as an +argument to deploy.py script + +* ``scenario.yaml:`` This configuration file defines translation between a + short deployment scenario name(os-nosdn-kvm_ovs_dpdk_bar-noha) and an actual deployment + scenario configuration file(no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml) + +* ``deployment-scenario-metadata:`` Contains the configuration metadata like + title,version,created,comment. + +.. code:: bash + + deployment-scenario-metadata: + title: NFV KVM and OVS-DPDK HA deployment + version: 0.0.1 + created: Dec 20 2016 + comment: NFV KVM and OVS-DPDK + +* ``stack-extensions:`` Stack extentions are opnfv added value features in form + of a fuel-plugin.Plugins listed in stack extensions are enabled and + configured. os-nosdn-kvm_ovs_dpdk_bar-noha scenario currently uses KVM-1.0.0 plugin and barometer-1.0.0 plugin. + +.. 
code:: bash + + stack-extensions: + - module: fuel-plugin-kvm + module-config-name: fuel-nfvkvm + module-config-version: 1.0.0 + module-config-override: + # Module config overrides + - module: fuel-plugin-collectd-ceilometer + module-config-name: fuel-barometer + module-config-version: 1.0.0 + module-config-override: + # Module config overrides + +* ``dea-override-config:`` Used to configure the HA mode,network segmentation + types and role to node assignments.These configurations overrides + corresponding keys in the dea_base.yaml and dea_pod_override.yaml. + These keys are used to deploy multiple nodes(``1 controller,3 computes``) + as mention below. + + * **Node 1**: + - This node has MongoDB and Controller roles + - The controller node runs the Identity service, Image Service, management portions of + Compute and Networking, Networking plug-in and the dashboard + - Uses VLAN as an interface + + * **Node 2**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + * **Node 3**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + * **Node 4**: + - This node has compute and Ceph-osd roles + - Ceph is a massively scalable, open source, distributed storage system + - By default, Compute uses KVM as the hypervisor + - Uses DPDK as an interface + + The below is the ``dea-override-config`` of the no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file. + +.. code:: bash + + dea-override-config: + fuel: + FEATURE_GROUPS: + - experimental + environment: + net_segment_type: vlan + nodes: + - id: 1 + interfaces: interfaces_vlan + role: mongo,controller + - id: 2 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + - id: 3 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + - id: 4 + interfaces: interfaces_dpdk + role: ceph-osd,compute + attributes: attributes_1 + + attributes_1: + hugepages: + dpdk: + value: 1024 + nova: + value: + '2048': 1024 + + network: + networking_parameters: + segmentation_type: vlan + networks: + - cidr: null + gateway: null + ip_ranges: [] + meta: + configurable: false + map_priority: 2 + name: private + neutron_vlan_range: true + notation: null + render_addr_mask: null + render_type: null + seg_type: vlan + use_gateway: false + vlan_start: null + name: private + vlan_start: null + + settings: + editable: + storage: + ephemeral_ceph: + description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes). + label: Ceph RBD for ephemeral volumes (Nova) + type: checkbox + value: true + weight: 75 + images_ceph: + description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing. + label: Ceph RBD for images (Glance) + restrictions: + - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected. 
+ type: checkbox + value: true + weight: 30 + +* ``dha-override-config:`` Provides information about the VM definition and + Network config for virtual deployment.These configurations overrides + the pod dha definition and points to the controller,compute and + fuel definition files. The noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml has no dha-config changes i.e., default configuration is used. + +* os-nosdn-kvm_ovs_dpdk_bar-noha scenario is successful when all the 4 Nodes are accessible, + up and running. + + + +**Note:** + +* In os-nosdn-kvm_ovs_dpdk_bar-noha scenario, OVS is installed on the compute nodes with DPDK configured + +* Baraometer plugin is also implemented along with KVM plugin. + +* Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml + +* Hugepages are only configured for compute nodes + +* This results in faster communication and data transfer among the compute nodes + +Scenario Usage Overview +----------------------- +.. Provide a brief overview on how to use the scenario and the features available to the +.. user. This should be an "introduction" to the userguide document, and explicitly link to it, +.. where the specifics of the features are covered including examples and API's + +* The high availability feature is disabled and deploymet is done by deploy.py with + noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml as an argument. +* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware + Environment: + + +Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-noha scenario: + +.. code:: bash + + $ cd ~/fuel/ci/ + $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso + +where, + -b is used to specify the configuration directory + + -i is used to specify the image downloaded from artifacts. + +Note: + +.. code:: bash + + Check $ sudo ./deploy.sh -h for further information. + +* os-nosdn-kvm_ovs_dpdk_bar-noha scenario can be executed from the jenkins project + "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-baremetal-daily-master" +* This scenario provides the No High Availability feature by deploying + 1 controller,3 compute nodes and checking if all the 4 nodes + are accessible(IP,up & running). +* Test Scenario is passed if deployment is successful and all 4 nodes have + accessibility (IP , up & running). + +Known Limitations, Issues and Workarounds +----------------------------------------- +.. Explain any known limitations here. + +* Test scenario os-nosdn-kvm_ovs_dpdk_bar-noha result is not stable. + +References +---------- + +For more information on the OPNFV Danube release, please visit +http://www.opnfv.org/Danube diff --git a/docs/release/userguide/Ftrace.debugging.tool.userguide.rst b/docs/release/userguide/Ftrace.debugging.tool.userguide.rst new file mode 100644 index 000000000..95b7f8fe5 --- /dev/null +++ b/docs/release/userguide/Ftrace.debugging.tool.userguide.rst @@ -0,0 +1,259 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. + +.. http://creativecommons.org/licenses/by/4.0 + +===================== +FTrace Debugging Tool +===================== + +About Ftrace +------------- +Ftrace is an internal tracer designed to find what is going on inside the kernel. It can be used +for debugging or analyzing latencies and performance related issues that take place outside of +user-space. 
Although ftrace is typically considered the function tracer, it is really a
framework of several assorted tracing utilities. One of the most common uses of
ftrace is event tracing.

**Note:**

- For KVM4NFV, Ftrace is preferred as it is an in-built kernel tool
- It is more stable compared to other debugging tools

Version Features
----------------

+-----------------------------+-----------------------------------------------+
|                             |                                               |
| **Release**                 | **Features**                                  |
|                             |                                               |
+=============================+===============================================+
|                             | - Ftrace Debugging tool is not implemented in |
| Colorado                    |   the Colorado release of KVM4NFV             |
|                             |                                               |
+-----------------------------+-----------------------------------------------+
|                             | - Ftrace aids in debugging the KVM4NFV        |
| Danube                      |   4.4-linux-kernel level issues               |
|                             | - Option to disable if not required           |
+-----------------------------+-----------------------------------------------+

Implementation of Ftrace
------------------------
Ftrace uses the debugfs file system to hold the control files as
well as the files to display output.

When debugfs is configured into the kernel (which selecting any ftrace
option will do), the directory /sys/kernel/debug will be created. To mount
this directory, you can add the following to your /etc/fstab file:

.. code:: bash

    debugfs /sys/kernel/debug debugfs defaults 0 0

Or you can mount it at run time with:

.. code:: bash

    mount -t debugfs nodev /sys/kernel/debug

Some configurations for Ftrace are used for other purposes, like finding latency or analyzing the system. For the purpose of debugging, the kernel configuration parameters that should be enabled are:

.. code:: bash

    CONFIG_FUNCTION_TRACER=y
    CONFIG_FUNCTION_GRAPH_TRACER=y
    CONFIG_STACK_TRACER=y
    CONFIG_DYNAMIC_FTRACE=y

The above parameters must be enabled in /boot/config-4.4.0-el7.x86_64, i.e., the kernel config file, for ftrace to work. If they are not enabled, change each parameter to ``y`` and run:

.. code:: bash

    On CentOS
    grub2-mkconfig -o /boot/grub2/grub.cfg
    sudo reboot

Re-check the parameters after reboot before running ftrace.

Files in Ftrace
---------------
Below is a list of a few major files in Ftrace.

    ``current_tracer:``

    This is used to set or display the current tracer that is configured.

    ``available_tracers:``

    This holds the different types of tracers that have been compiled into the kernel. The tracers listed here can be configured by echoing their name into current_tracer.

    ``tracing_on:``

    This sets or displays whether writing to the trace ring buffer is enabled. Echo 0 into this file to disable the tracer or 1 to enable it.

    ``trace:``

    This file holds the output of the trace in a human readable format.

    ``tracing_cpumask:``

    This is a mask that lets the user only trace on specified CPUs. The format is a hex string representing the CPUs.

    ``events:``

    It holds event tracepoints (also known as static tracepoints) that have been compiled into the kernel. It shows what event tracepoints exist and how they are grouped by system.
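As a concrete illustration of the files listed above, the snippet below enables a single static tracepoint and reads back a few lines of output. A minimal sketch, assuming debugfs is mounted at /sys/kernel/debug and the commands are run as root; sched_switch is only an example event choice:

.. code:: bash

    $ cd /sys/kernel/debug/tracing

    # Enable one scheduler tracepoint and turn tracing on
    $ echo 1 > events/sched/sched_switch/enable
    $ echo 1 > tracing_on

    # Let it collect for a moment, then inspect the readable output
    $ sleep 1
    $ head -20 trace

    # Turn tracing off and disable the event again
    $ echo 0 > tracing_on
    $ echo 0 > events/sched/sched_switch/enable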
Available Tracers
-----------------

Here is the list of current tracers that may be configured based on usage.

- function
- function_graph
- irqsoff
- preemptoff
- preemptirqsoff
- wakeup
- wakeup_rt

Brief notes on a few:

    ``function:``

    Function call tracer to trace all kernel functions.

    ``function_graph:``

    Similar to the function tracer except that the function tracer probes the functions on their entry whereas the function graph tracer traces on both entry and exit of the functions.

    ``nop:``

    This is the "trace nothing" tracer. To remove tracers from tracing, simply echo "nop" into current_tracer.

Examples:

.. code:: bash

    To list available tracers:
    [tracing]# cat available_tracers
    function_graph function wakeup wakeup_rt preemptoff irqsoff preemptirqsoff nop

    Usage:
    [tracing]# echo function > current_tracer
    [tracing]# cat current_tracer
    function

    To view output:
    [tracing]# cat trace | head -10

    To stop tracing:
    [tracing]# echo 0 > tracing_on

    To start/restart tracing:
    [tracing]# echo 1 > tracing_on

Ftrace in KVM4NFV
-----------------
Ftrace is part of the KVM4NFV D-Release. The KVM4NFV-built 4.4 Linux kernel will be tested by
executing cyclictest and analyzing the latency values (max, min, avg) generated in the results.
Ftrace (or the function tracer) is a stable in-built kernel debugging tool which tests the real-time
kernel and outputs a log as part of the code. These output logs are useful in the following ways:

    - Kernel debugging
    - Helping with kernel code optimization
    - Better understanding the kernel-level code flow

Ftrace logs for KVM4NFV can be found `here`_:

.. _here: http://artifacts.opnfv.org/kvmfornfv.html

Ftrace Usage in KVM4NFV Kernel Debugging
----------------------------------------
Kvm4nfv has two scripts in /ci/envs that provide the ftrace tool:

    - enable_trace.sh
    - disable_trace.sh

.. code:: bash

    Found at:
    $ cd kvmfornfv/ci/envs

Enabling Ftrace in KVM4NFV
--------------------------

The enable_trace.sh script is triggered by changing the ftrace_enable value in the test_kvmfornfv.sh
script to 1 (it is zero by default). Change it as below to enable Ftrace.

.. code:: bash

    ftrace_enable=1

Note:

- Ftrace is enabled before the test execution starts, so the whole run is captured

Details of enable_trace script
------------------------------

- The CPU coremask is calculated using getcpumask()
- All the required events are enabled by echoing "1" to the $TRACEDIR/events/event_name/enable file

Example:

.. code:: bash

    $TRACEDIR = /sys/kernel/debug/tracing/
    sudo bash -c "echo 1 > $TRACEDIR/events/irq/enable"
    sudo bash -c "echo 1 > $TRACEDIR/events/task/enable"
    sudo bash -c "echo 1 > $TRACEDIR/events/syscalls/enable"

The set_event file contains the list of all enabled events

- The function tracer is selected. It may be changed to other available tracers based on requirement

.. code:: bash

    sudo bash -c "echo function > $TRACEDIR/current_tracer"

- When tracing is turned ON by setting ``tracing_on=1``, the ``trace`` file keeps getting appended with the traced data until ``tracing_on=0``, after which the ftrace buffer gets cleared.

.. code:: bash

    To stop/pause,
    echo 0 > tracing_on

    To start/restart,
    echo 1 > tracing_on

- Once tracing is disabled, the disable_trace.sh script is triggered.
+
+Details of disable_trace Script
+-------------------------------
+In the disable_trace script, the following steps are done:
+
+- The trace file is copied and moved to the /tmp folder based on the timestamp
+- The current_tracer file is set to ``nop``
+- The set_event file is cleared i.e., all the enabled events are disabled
+- Kernel Ftrace is disabled/unmounted
+
+
+Publishing Ftrace logs:
+-----------------------
+The generated trace log is pushed to `artifacts`_ by the kvmfornfv-upload-artifact.sh
+script available in releng, which is triggered as a part of the kvm4nfv daily job.
+The `trigger`_ in the script is:
+
+.. code:: bash
+
+   echo "Uploading artifacts for future debugging needs...."
+   gsutil cp -r $WORKSPACE/build_output/log-*.tar.gz $GS_LOG_LOCATION > $WORKSPACE/gsutil.log 2>&1
+
+.. _artifacts: https://artifacts.opnfv.org/
+
+.. _trigger: https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=jjb/kvmfornfv/kvmfornfv-upload-artifact.sh;h=56fb4f9c18a83c689a916dc6c85f9e3ddf2479b2;hb=HEAD#l53
diff --git a/docs/release/userguide/abstract.rst b/docs/release/userguide/abstract.rst new file mode 100644 index 000000000..ec05b2560 --- /dev/null +++ b/docs/release/userguide/abstract.rst @@ -0,0 +1,16 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+==================
+Userguide Abstract
+==================
+
+In the KVM4NFV project, we focus on the KVM hypervisor to enhance it for NFV,
+by looking at the following areas initially:
+
+* Minimal Interrupt latency variation for data plane VNFs:
+  * Minimal Timing Variation for Timing correctness of real-time VNFs
+  * Minimal packet latency variation for data-plane VNFs
+* Inter-VM communication
+* Fast live migration
diff --git a/docs/release/userguide/common.platform.render.rst b/docs/release/userguide/common.platform.render.rst new file mode 100644 index 000000000..46b4707a3 --- /dev/null +++ b/docs/release/userguide/common.platform.render.rst @@ -0,0 +1,15 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+================================
+Using common platform components
+================================
+
+This section outlines basic usage principles and methods for some of the
+commonly deployed components of supported OPNFV scenarios in Danube.
+The subsections provide an outline of how these components are commonly
+used and how to address them in an OPNFV deployment. The components derive
+from autonomous upstream communities and where possible this guide will
+provide direction to the relevant documentation made available by those
+communities to better help you navigate the OPNFV deployment.
diff --git a/docs/release/userguide/feature.userguide.render.rst b/docs/release/userguide/feature.userguide.render.rst new file mode 100644 index 000000000..3bed21fc9 --- /dev/null +++ b/docs/release/userguide/feature.userguide.render.rst @@ -0,0 +1,14 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+==========================
+Using Danube Features
+==========================
+
+The following sections of the user guide provide feature-specific usage
+guidelines and references for the KVM4NFV project.
+ +* <project>/docs/userguide/low_latency.userguide.rst +* <project>/docs/userguide/live_migration.userguide.rst +* <project>/docs/userguide/tuning.userguide.rst diff --git a/docs/release/userguide/images/Cpustress-Idle.png b/docs/release/userguide/images/Cpustress-Idle.png Binary files differnew file mode 100644 index 000000000..b4b4e1112 --- /dev/null +++ b/docs/release/userguide/images/Cpustress-Idle.png diff --git a/docs/release/userguide/images/Dashboard-screenshot-1.png b/docs/release/userguide/images/Dashboard-screenshot-1.png Binary files differnew file mode 100644 index 000000000..7ff809697 --- /dev/null +++ b/docs/release/userguide/images/Dashboard-screenshot-1.png diff --git a/docs/release/userguide/images/Dashboard-screenshot-2.png b/docs/release/userguide/images/Dashboard-screenshot-2.png Binary files differnew file mode 100644 index 000000000..a5c4e01b5 --- /dev/null +++ b/docs/release/userguide/images/Dashboard-screenshot-2.png diff --git a/docs/release/userguide/images/Guest_Scenario.png b/docs/release/userguide/images/Guest_Scenario.png Binary files differnew file mode 100644 index 000000000..550c0fe6f --- /dev/null +++ b/docs/release/userguide/images/Guest_Scenario.png diff --git a/docs/release/userguide/images/Host_Scenario.png b/docs/release/userguide/images/Host_Scenario.png Binary files differnew file mode 100644 index 000000000..89789aa7b --- /dev/null +++ b/docs/release/userguide/images/Host_Scenario.png diff --git a/docs/release/userguide/images/IOstress-Idle.png b/docs/release/userguide/images/IOstress-Idle.png Binary files differnew file mode 100644 index 000000000..fe4e5fc81 --- /dev/null +++ b/docs/release/userguide/images/IOstress-Idle.png diff --git a/docs/release/userguide/images/IXIA1.png b/docs/release/userguide/images/IXIA1.png Binary files differnew file mode 100644 index 000000000..682de7c57 --- /dev/null +++ b/docs/release/userguide/images/IXIA1.png diff --git a/docs/release/userguide/images/Idle-Idle.png b/docs/release/userguide/images/Idle-Idle.png Binary files differnew file mode 100644 index 000000000..d619f65ea --- /dev/null +++ b/docs/release/userguide/images/Idle-Idle.png diff --git a/docs/release/userguide/images/Memorystress-Idle.png b/docs/release/userguide/images/Memorystress-Idle.png Binary files differnew file mode 100644 index 000000000..b9974a7a2 --- /dev/null +++ b/docs/release/userguide/images/Memorystress-Idle.png diff --git a/docs/release/userguide/images/SRIOV_Scenario.png b/docs/release/userguide/images/SRIOV_Scenario.png Binary files differnew file mode 100644 index 000000000..62e116ada --- /dev/null +++ b/docs/release/userguide/images/SRIOV_Scenario.png diff --git a/docs/release/userguide/images/UseCaseDashboard.png b/docs/release/userguide/images/UseCaseDashboard.png Binary files differnew file mode 100644 index 000000000..9dd14d26e --- /dev/null +++ b/docs/release/userguide/images/UseCaseDashboard.png diff --git a/docs/release/userguide/images/cpu-stress-idle-test-type.png b/docs/release/userguide/images/cpu-stress-idle-test-type.png Binary files differnew file mode 100644 index 000000000..9a5bdf6de --- /dev/null +++ b/docs/release/userguide/images/cpu-stress-idle-test-type.png diff --git a/docs/release/userguide/images/dashboard-architecture.png b/docs/release/userguide/images/dashboard-architecture.png Binary files differnew file mode 100644 index 000000000..821484e74 --- /dev/null +++ b/docs/release/userguide/images/dashboard-architecture.png diff --git a/docs/release/userguide/images/guest_pk_fw.png 
b/docs/release/userguide/images/guest_pk_fw.png Binary files differnew file mode 100644 index 000000000..5f80ecce5 --- /dev/null +++ b/docs/release/userguide/images/guest_pk_fw.png diff --git a/docs/release/userguide/images/host_pk_fw.png b/docs/release/userguide/images/host_pk_fw.png Binary files differnew file mode 100644 index 000000000..dcbe921f2 --- /dev/null +++ b/docs/release/userguide/images/host_pk_fw.png diff --git a/docs/release/userguide/images/idle-idle-test-type.png b/docs/release/userguide/images/idle-idle-test-type.png Binary files differnew file mode 100644 index 000000000..bd241bfe1 --- /dev/null +++ b/docs/release/userguide/images/idle-idle-test-type.png diff --git a/docs/release/userguide/images/io-stress-idle-test-type.png b/docs/release/userguide/images/io-stress-idle-test-type.png Binary files differnew file mode 100644 index 000000000..f79cb5984 --- /dev/null +++ b/docs/release/userguide/images/io-stress-idle-test-type.png diff --git a/docs/release/userguide/images/memory-stress-idle-test-type.png b/docs/release/userguide/images/memory-stress-idle-test-type.png Binary files differnew file mode 100644 index 000000000..1ca839a4a --- /dev/null +++ b/docs/release/userguide/images/memory-stress-idle-test-type.png diff --git a/docs/release/userguide/images/sriov_pk_fw.png b/docs/release/userguide/images/sriov_pk_fw.png Binary files differnew file mode 100644 index 000000000..bf7ad6f9b --- /dev/null +++ b/docs/release/userguide/images/sriov_pk_fw.png diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst new file mode 100644 index 000000000..ff6898746 --- /dev/null +++ b/docs/release/userguide/index.rst @@ -0,0 +1,25 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _kvmfornfv-userguide:
+
+******************
+KVM4NFV User Guide
+******************
+
+.. toctree::
+   :maxdepth: 3
+
+   ./abstract.rst
+   ./introduction.rst
+   ./common.platform.render.rst
+   ./feature.userguide.render.rst
+   ./Ftrace.debugging.tool.userguide.rst
+   ./kvmfornfv.cyclictest-dashboard.userguide.rst
+   ./low_latency.userguide.rst
+   ./live_migration.userguide.rst
+   ./openstack.rst
+   ./packet_forwarding.userguide.rst
+   ./pcm_utility.userguide.rst
+   ./tuning.userguide.rst
diff --git a/docs/release/userguide/introduction.rst b/docs/release/userguide/introduction.rst new file mode 100644 index 000000000..d4ee81143 --- /dev/null +++ b/docs/release/userguide/introduction.rst @@ -0,0 +1,88 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+======================
+Userguide Introduction
+======================
+
+Overview
+--------
+
+The project "NFV Hypervisors-KVM" makes collaborative efforts to enable NFV
+features for existing hypervisors, which are not necessarily designed or
+targeted to meet the requirements for the NFVI. The KVM4NFV scenario
+consists of Continuous Integration builds, deployments and testing
+combinations of virtual infrastructure components.
+
+KVM4NFV Features
+----------------
+
+Using this project, the following areas are targeted:
+
+* Minimal Interrupt latency variation for data plane VNFs:
+  * Minimal Timing Variation for Timing correctness of real-time VNFs
+  * Minimal packet latency variation for data-plane VNFs
+* Inter-VM communication
+* Fast live migration
+
+Some of the above items would require software development and/or specific
+hardware features, and some need just configuration information for the
+system (hardware, BIOS, OS, etc.).
+
+We include a requirements gathering stage as a formal part of the project.
+For each subproject, we will start with an organized requirement stage so
+that we can determine specific use cases (e.g. what kind of VMs should be
+live migrated) and requirements (e.g. interrupt latency, jitters, Mpps,
+migration-time, down-time, etc.) to set out the performance goals.
+
+Potential future projects would include:
+
+* Dynamic scaling (via scale-out) using VM instantiation
+* Fast live migration for SR-IOV
+
+The user guide outlines how to work with key components and features in
+the platform; each feature description section will indicate the scenarios
+that provide the components and configurations required to use it.
+
+The configuration guide details which scenarios are best for you and how to
+install and configure them.
+
+General usage guidelines
+------------------------
+
+The user guide for KVM4NFV features and capabilities provides step-by-step
+instructions for using features that have been configured according to the
+installation and configuration instructions.
+
+Scenarios User Guide
+--------------------
+
+The procedure to deploy/test `KVM4NFV scenarios`_ in a nested virtualization
+or bare-metal environment is described at the link below. The kvm4nfv user guide can
+be found at docs/scenarios.
+
+.. code:: bash
+
+   http://artifacts.opnfv.org/kvmfornfv/docs/index.html#kvmfornfv-scenarios-overview-and-description
+
+.. _KVM4NFV scenarios: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#kvmfornfv-scenarios-overview-and-description
+
+The deployment has been verified for the `os-nosdn-kvm-ha`_, os-nosdn-kvm-noha, `os-nosdn-kvm_ovs_dpdk-ha`_,
+`os-nosdn-kvm_ovs_dpdk-noha`_, `os-nosdn-kvm_ovs_dpdk_bar-ha`_ and `os-nosdn-kvm_ovs_dpdk_bar-noha`_ test scenarios.
+
+For a brief view of the above scenarios use:
+
+.. code:: bash
+
+   http://artifacts.opnfv.org/kvmfornfv/docs/index.html#scenario-abstract
+
+.. _os-nosdn-kvm-ha: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#kvmfornfv-scenarios-overview-and-description
+
+.. _os-nosdn-kvm_ovs_dpdk-ha: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#os-nosdn-kvm-nfv-ovs-dpdk-ha-overview-and-description
+
+.. _os-nosdn-kvm_ovs_dpdk-noha: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#os-nosdn-kvm-nfv-ovs-dpdk-noha-overview-and-description
+
+.. _os-nosdn-kvm_ovs_dpdk_bar-ha: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#os-nosdn-kvm-nfv-ovs-dpdk_bar-ha-overview-and-description
+
+.. _os-nosdn-kvm_ovs_dpdk_bar-noha: http://artifacts.opnfv.org/kvmfornfv/docs/index.html#os-nosdn-kvm-nfv-ovs-dpdk_bar-noha-overview-and-description
diff --git a/docs/release/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst b/docs/release/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst new file mode 100644 index 000000000..468f471e7 --- /dev/null +++ b/docs/release/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst @@ -0,0 +1,318 @@ +.. 
This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+=======================
+KVM4NFV Dashboard Guide
+=======================
+
+Dashboard for KVM4NFV Daily Test Results
+----------------------------------------
+
+Abstract
+--------
+
+This chapter explains the procedure to configure InfluxDB and Grafana on Node1 or Node2,
+depending on the test type, to publish KVM4NFV test results. The cyclictest cases are executed
+and the results are published on the Yardstick Dashboard (Grafana). InfluxDB is the database which will
+store the cyclictest results and Grafana is a visualisation suite used to view the maximum, minimum and
+average values of the time-series data of cyclictest results. The framework is shown in the image below.
+
+.. figure:: images/dashboard-architecture.png
+   :name: dashboard-architecture
+   :width: 100%
+   :align: center
+
+Version Features
+----------------
+
++-----------------------------+--------------------------------------------+
+|                             |                                            |
+| **Release**                 | **Features**                               |
+|                             |                                            |
++=============================+============================================+
+|                             | - Data published in Json file format       |
+| Colorado                    | - No database support to store the test's  |
+|                             |   latency values of cyclictest             |
+|                             | - For each run, the previous run's output  |
+|                             |   file is replaced with a new file with    |
+|                             |   current latency values.                  |
++-----------------------------+--------------------------------------------+
+|                             | - Test results are stored in Influxdb      |
+|                             | - Graphical representation of the latency  |
+| Danube                      |   values using Grafana suite. (Dashboard)  |
+|                             | - Supports graphical view for multiple     |
+|                             |   testcases and test-types (Stress/Idle)   |
++-----------------------------+--------------------------------------------+
+
+
+Installation Steps:
+-------------------
+To configure Yardstick, InfluxDB and Grafana for the KVM4NFV project, the following sequence of steps is followed:
+
+**Note:**
+
+All the steps below are done by the script, which is a part of the CICD integration of kvmfornfv.
+
+.. code:: bash
+
+   For Yardstick:
+   git clone https://gerrit.opnfv.org/gerrit/yardstick
+
+   For InfluxDB:
+   docker pull tutum/influxdb
+   docker run -d --name influxdb -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 tutum/influxdb
+   docker exec -it influxdb bash
+   $influx
+   >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+   >CREATE DATABASE yardstick;
+   >use yardstick;
+   >show MEASUREMENTS;
+
+   For Grafana:
+   docker pull grafana/grafana
+   docker run -d --name grafana -p 3000:3000 grafana/grafana
+
+The Yardstick document for Grafana and InfluxDB configuration can be found `here`_.
+
+.. _here: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally
+
+Configuring the Dispatcher Type:
+---------------------------------
+The dispatcher type needs to be configured in /etc/yardstick/yardstick.conf, depending on the dispatcher
+method used to store the cyclictest results. A sample yardstick.conf can be found at
+/yardstick/etc/yardstick.conf.sample, which can be copied to /etc/yardstick.
+
+.. code:: bash
+
+   mkdir -p /etc/yardstick/
+   cp /yardstick/etc/yardstick.conf.sample /etc/yardstick/yardstick.conf
+
+**Dispatcher Modules:**
+
+Three types of dispatcher modules are available to store the cyclictest results.
+
+- File
+- InfluxDB
+- HTTP
+
+**1. File**: The default dispatcher module is file.
If the dispatcher module is configured as file, then the test results are stored in a temporary file yardstick.out
+(default path: /tmp/yardstick.out).
+The dispatcher module of the "Verify Job" is the default. So, the results are stored in the yardstick.out file for the verify job.
+Storing all the verify jobs in the InfluxDB database would cause redundancy of latency values. Hence, the file output format is preferred.
+
+.. code:: bash
+
+   [DEFAULT]
+   debug = False
+   dispatcher = file
+
+   [dispatcher_file]
+   file_path = /tmp/yardstick.out
+   max_bytes = 0
+   backup_count = 0
+
+**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in InfluxDB.
+Users can check the test results stored in InfluxDB (the database) on Grafana, which is used to visualize the time-series data.
+
+To configure influxdb, the following content in /etc/yardstick/yardstick.conf needs to be updated:
+
+.. code:: bash
+
+   [DEFAULT]
+   debug = False
+   dispatcher = influxdb
+
+   [dispatcher_influxdb]
+   timeout = 5
+   target = http://127.0.0.1:8086  ##Mention the IP where influxdb is running
+   db_name = yardstick
+   username = root
+   password = root
+
+The dispatcher module of the "Daily Job" is influxdb. So, the results are stored in InfluxDB and then published to the Dashboard.
+
+**3. HTTP**: If the dispatcher module is configured as http, users can check the test results on the OPNFV testing dashboard, which uses MongoDB as its backend.
+
+.. code:: bash
+
+   [DEFAULT]
+   debug = False
+   dispatcher = http
+
+   [dispatcher_http]
+   timeout = 5
+   target = http://127.0.0.1:8000/results
+
+.. figure:: images/UseCaseDashboard.png
+
+
+Detailing the dispatcher module in verify and daily Jobs:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+KVM4NFV updates the dispatcher module in the yardstick configuration file (/etc/yardstick/yardstick.conf) depending on the job type (Verify/Daily).
+Once the test is completed, results are published to the respective dispatcher modules.
+
+The dispatcher module is configured for each job type as mentioned below.
+
+1. ``Verify Job`` : The default "DISPATCHER_TYPE", i.e. file (/tmp/yardstick.out), is used. Users can also see the test results in the Jenkins console log.
+
+.. code:: bash
+
+   *"max": "00030", "avg": "00006", "min": "00006"*
+
+2. ``Daily Job`` : The OPNFV InfluxDB url is configured as the dispatcher module.
+
+.. code:: bash
+
+   DISPATCHER_TYPE=influxdb
+   DISPATCHER_INFLUXDB_TARGET="http://104.197.68.199:8086"
+
+InfluxDB only supports the line protocol, and the json protocol is deprecated.
+
+For example, the raw_result of cyclictest in json format is:
+ ::
+
+  {
+   "benchmark": {
+       "timestamp": 1478234859.065317,
+       "errors": "",
+       "data": {
+          "max": "00012",
+          "avg": "00008",
+          "min": "00007"
+       },
+       "sequence": 1
+    },
+    "runner_id": 23
+  }
+
+
+With the help of "influxdb_line_protocol", the json is transformed into a line string:
+ ::
+
+  'kvmfornfv_cyclictest_idle_idle,deploy_scenario=unknown,host=kvm.LF,
+  installer=unknown,pod_name=unknown,runner_id=23,scenarios=Cyclictest,
+  task_id=e7be7516-9eae-406e-84b6-e931866fa793,version=unknown
+  avg="00008",max="00012",min="00007" 1478234859065316864'
+
+
+
+The InfluxDB API, which is already implemented in `Influxdb`_, will post the data in line format into the database.
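+
+As a rough illustration of what that write amounts to, here is a minimal sketch that posts the
+line string shown above to the local InfluxDB from the installation steps (the shortened tag set
+is an assumption for readability; InfluxDB 1.x exposes this ``/write`` HTTP endpoint):
+
+.. code:: bash
+
+   # Post a (shortened) line-protocol point into the yardstick database;
+   # a 204 No Content response means the point was accepted
+   curl -i -X POST 'http://127.0.0.1:8086/write?db=yardstick&u=root&p=root' \
+     --data-binary 'kvmfornfv_cyclictest_idle_idle,host=kvm.LF,runner_id=23 avg="00008",max="00012",min="00007" 1478234859065316864'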
+
+``Displaying Results on Grafana dashboard:``
+
+- Once the test results are stored in InfluxDB, the dashboard configuration file (JSON), which is used to display the cyclictest results
+  on Grafana, needs to be created by following the `Grafana-procedure`_ and then pushed into the `yardstick-repo`_
+
+- Grafana can be accessed at `Login`_ using the credentials opnfv/opnfv and used for visualizing the collected test data as shown in `Visual`_
+
+
+.. figure:: images/Dashboard-screenshot-1.png
+   :name: dashboard-screenshot-1
+   :width: 100%
+   :align: center
+
+.. figure:: images/Dashboard-screenshot-2.png
+   :name: dashboard-screenshot-2
+   :width: 100%
+   :align: center
+
+.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
+
+.. _Visual: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest
+
+.. _Login: http://testresults.opnfv.org/grafana/login
+
+.. _Grafana-procedure: https://wiki.opnfv.org/display/yardstick/How+to+work+with+grafana+dashboard
+
+.. _yardstick-repo: https://git.opnfv.org/cgit/yardstick/tree/dashboard/KVMFORNFV-Cyclictest
+
+.. _GrafanaDoc: http://docs.grafana.org/
+
+Understanding Kvm4nfv Grafana Dashboard
+---------------------------------------
+
+The Kvm4nfv dashboard found at http://testresults.opnfv.org/ currently supports a graphical view of cyclictest. To view the Kvm4nfv dashboard use:
+
+.. code:: bash
+
+   http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest
+
+   The login details are:
+
+   Username: opnfv
+   Password: opnfv
+
+
+.. code:: bash
+
+   The JSON of the kvmfornfv-cyclictest dashboard can be found at:
+
+   $ git clone https://gerrit.opnfv.org/gerrit/yardstick.git
+   $ cd yardstick/dashboard
+   $ cat KVMFORNFV-Cyclictest
+
+The Dashboard has four tables, each representing a specific test-type of the cyclictest case:
+
+- Kvmfornfv_Cyclictest_Idle-Idle
+- Kvmfornfv_Cyclictest_CPUstress-Idle
+- Kvmfornfv_Cyclictest_Memorystress-Idle
+- Kvmfornfv_Cyclictest_IOstress-Idle
+
+Note:
+
+- For all graphs, the X-axis is marked with time stamps and the Y-axis with values in microseconds.
+
+**A brief about what each graph of the dashboard represents:**
+
+1. Idle-Idle Graph
+~~~~~~~~~~~~~~~~~~~~
+`Idle-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Idle_Idle test-type of the cyclictest.
+Idle_Idle implies that no stress is applied on the Host or the Guest.
+
+.. _Idle-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=10&fullscreen
+
+.. figure:: images/Idle-Idle.png
+   :name: Idle-Idle graph
+   :width: 100%
+   :align: center
+
+2. CPU_Stress-Idle Graph
+~~~~~~~~~~~~~~~~~~~~~~~~~
+`Cpu_stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Cpu-stress_Idle test-type of the cyclictest.
+Cpu-stress_Idle implies that CPU stress is applied on the Host and no stress on the Guest.
+
+.. _Cpu_stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=11&fullscreen
+
+.. figure:: images/Cpustress-Idle.png
+   :name: cpustress-idle graph
+   :width: 100%
+   :align: center
+
+3. Memory_Stress-Idle Graph
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+`Memory_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Memory-stress_Idle test-type of the Cyclictest.
+Memory-stress_Idle implies that Memory stress is applied on the Host and no stress on the Guest.
+
+..
_Memory_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=12&fullscreen
+
+.. figure:: images/Memorystress-Idle.png
+   :name: memorystress-idle graph
+   :width: 100%
+   :align: center
+
+4. IO_Stress-Idle Graph
+~~~~~~~~~~~~~~~~~~~~~~~~~
+`IO_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the IO-stress_Idle test-type of the Cyclictest.
+IO-stress_Idle implies that IO stress is applied on the Host and no stress on the Guest.
+
+.. _IO_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=13&fullscreen
+
+.. figure:: images/IOstress-Idle.png
+   :name: iostress-idle graph
+   :width: 100%
+   :align: center
+
+Future Scope
+-------------
+Future work will include adding the kvmfornfv_Packet-forwarding test results into Grafana and InfluxDB.
diff --git a/docs/release/userguide/live_migration.userguide.rst b/docs/release/userguide/live_migration.userguide.rst new file mode 100644 index 000000000..9fa9b82fd --- /dev/null +++ b/docs/release/userguide/live_migration.userguide.rst @@ -0,0 +1,121 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+Fast Live Migration
+===================
+
+The NFV project requires fast live migration. The specific requirement is a
+total live migration time < 2 sec, while keeping the VM downtime < 10 ms when
+running a DPDK L2 forwarding workload.
+
+We measured the baseline data of migrating an idle 8 GiB guest running a DPDK L2
+forwarding workload and observed that the total live migration time was 2271 ms
+while the VM downtime was 26 ms. Both of these indicators failed to satisfy
+the requirements.
+
+Current Challenges
+------------------
+
+The following 4 features have been developed over the years to make the live
+migration process faster.
+
++ XBZRLE:
+  Helps to reduce the network traffic by just sending the
+  compressed data.
++ RDMA:
+  Uses a specific NIC to increase the efficiency of data
+  transmission.
++ Multi thread compression:
+  Compresses the data before transmission.
++ Auto convergence:
+  Reduces the data rate of dirty pages.
+
+Tests show none of the above features can satisfy the requirements of NFV.
+XBZRLE and multi thread compression do the compression entirely in software and
+they are not fast enough in a 10 Gbps network environment. RDMA is not flexible
+because it has to transport all the guest memory to the destination without zero
+page optimization. Auto convergence is not appropriate for NFV because it will
+impact the guest's performance.
+
+So we need to find other ways for optimization.
+
+Optimizations
+-------------------------
+a. Delay non-emergency operations
+   By profiling, it was discovered that some of the cleanup operations during
+   the stop and copy stage are the main reason for the long VM downtime. The
+   cleanup operation includes stopping the dirty page logging, which is a
+   time-consuming operation. By deferring these operations until the data
+   transmission is completed, the VM downtime is reduced to about 5-7 ms.
+b. Optimize zero page checking
+   Currently QEMU uses the SSE2 instruction to optimize the zero page
+   checking. The SSE2 instruction can process 16 bytes per instruction.
+   By using the AVX2 instruction, we can process 32 bytes per instruction.
+   Testing shows that using AVX2 can speed up the zero page checking process
+   by about 25%.
+c. Remove unnecessary context synchronization.
+   The CPU context was being synchronized twice during live migration. Removing
+   this unnecessary synchronization shortened the VM downtime by about 100 us.
+
+Test Environment
+----------------
+
+The source and destination hosts have the same hardware and OS:
+::
+
+  Host: HSW-EP
+  CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
+  RAM: 64G
+  OS: RHEL 7.1
+  Kernel: 4.2
+  QEMU v2.4.0
+
+  Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit
+  X540-AT2 (rev 01)
+
+QEMU parameters:
+::
+
+  ${qemu} -smp ${guest_cpus} -monitor unix:${qmp_sock},server,nowait -daemonize \
+  -cpu host,migratable=off,+invtsc,+tsc-deadline,pmu=off \
+  -realtime mlock=on -mem-prealloc -enable-kvm -m 1G \
+  -mem-path /mnt/hugetlbfs-1g \
+  -drive file=/root/minimal-centos1.qcow2,cache=none,aio=threads \
+  -netdev user,id=guest0,hostfwd=tcp:5555-:22 \
+  -device virtio-net-pci,netdev=guest0 \
+  -nographic -serial /dev/null -parallel /dev/null
+
+Network connection
+
+.. figure:: lmnetwork.jpg
+   :align: center
+   :alt: live migration network connection
+   :figwidth: 80%
+
+
+Test Result
+-----------
+The downtime is set to 10 ms when doing the test. We use pktgen to send
+packets to the guest; the packet size is 64 bytes, and the line rate is 2013
+Mbps.
+
+a. Total live migration time
+
+   The total live migration time before and after optimization is shown in the
+   chart below. For an idle guest, we can reduce the total live migration time
+   from 2070 ms to 401 ms. For a guest running the DPDK L2 forwarding workload,
+   the total live migration time is reduced from 2271 ms to 654 ms.
+
+.. figure:: lmtotaltime.jpg
+   :align: center
+   :alt: total live migration time
+
+b. VM downtime
+
+   The VM downtime before and after optimization is shown in the chart below.
+   For an idle guest, we can reduce the VM downtime from 29 ms to 9 ms. For a guest
+   running the DPDK L2 forwarding workload, the VM downtime is reduced from 26 ms to
+   5 ms.
+
+.. figure:: lmdowntime.jpg
+   :align: center
+   :alt: vm downtime
+   :figwidth: 80%
diff --git a/docs/release/userguide/lmdowntime.jpg b/docs/release/userguide/lmdowntime.jpg Binary files differnew file mode 100644 index 000000000..c9faa4c73 --- /dev/null +++ b/docs/release/userguide/lmdowntime.jpg diff --git a/docs/release/userguide/lmnetwork.jpg b/docs/release/userguide/lmnetwork.jpg Binary files differnew file mode 100644 index 000000000..8a9a324c3 --- /dev/null +++ b/docs/release/userguide/lmnetwork.jpg diff --git a/docs/release/userguide/lmtotaltime.jpg b/docs/release/userguide/lmtotaltime.jpg Binary files differnew file mode 100644 index 000000000..2dced3987 --- /dev/null +++ b/docs/release/userguide/lmtotaltime.jpg diff --git a/docs/release/userguide/low_latency.userguide.rst b/docs/release/userguide/low_latency.userguide.rst new file mode 100644 index 000000000..f027b4939 --- /dev/null +++ b/docs/release/userguide/low_latency.userguide.rst @@ -0,0 +1,264 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+Low Latency Environment
+=======================
+
+Achieving low latency with the KVM4NFV project requires setting up a special
+test environment. This environment includes the BIOS settings, kernel
+configuration, kernel parameters and the run-time environment.
+
+Hardware Environment Description
+--------------------------------
+
+BIOS setup plays an important role in achieving real-time latency.
A collection
+of relevant settings, used on the platform where the baseline performance data
+was collected, is detailed below:
+
+CPU Features
+~~~~~~~~~~~~
+
+Some special CPU features like the TSC-deadline timer, invariant TSC and process
+posted interrupts, etc., are helpful for latency reduction.
+
+CPU Topology
+~~~~~~~~~~~~
+
+NUMA topology is also important for latency reduction.
+
+BIOS Setup
+~~~~~~~~~~
+
+Careful BIOS setup is important in achieving real-time latency. Different
+platforms have different BIOS setups; below are the important BIOS settings on
+the platform used to collect the baseline performance data.
+
+Software Environment Setup
+--------------------------
+Both the host and the guest environment need to be configured properly to
+reduce latency variations. Below are some suggested kernel configurations.
+The ci/envs/ directory gives a detailed implementation of how to set up the
+environment.
+
+Kernel Parameter
+~~~~~~~~~~~~~~~~
+
+Please check the default kernel configuration in the source code at:
+kernel/arch/x86/configs/opnfv.config.
+
+Below is a host kernel boot line example:
+
+.. code:: bash
+
+   isolcpus=11-15,31-35 nohz_full=11-15,31-35 rcu_nocbs=11-15,31-35
+   iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G mce=off idle=poll
+   intel_pstate=disable processor.max_cstate=1 pcie_aspm=off tsc=reliable
+
+Below is a guest kernel boot line example:
+
+.. code:: bash
+
+   isolcpus=1 nohz_full=1 rcu_nocbs=1 mce=off idle=poll default_hugepagesz=1G
+   hugepagesz=1G
+
+Please refer to `tuning.userguide` for more explanation.
+
+Run-time Environment Setup
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Not only are special kernel parameters needed but a special run-time
+environment is also required. Please refer to `tuning.userguide` for
+more explanation.
+
+Test cases to measure Latency
+-----------------------------
+The performance of kvm4nfv is assessed by the latency values. The Cyclictest and Packet forwarding
+test cases yield real-time latency values: average, minimum and maximum.
+
+* Cyclictest
+
+* Packet Forwarding test
+
+1. Cyclictest case
+-------------------
+Cyclictest results are the most frequently cited real-time Linux metric. The core concept of Cyclictest is very simple.
+In KVM4NFV, cyclictest is run on the Guest-VM with the 4.4-kernel RPM installed. It generates Max, Min and Avg
+values which help in assessing the kernel used. Cyclictest is currently divided into the following test types:
+
+* Idle-Idle
+* CPU_stress-Idle
+* Memory_stress-Idle
+* IO_stress-Idle
+
+Future scope of work may include the below test-types:
+
+* CPU_stress-CPU_stress
+* Memory_stress-Memory_stress
+* IO_stress-IO_stress
+
+Understanding the naming convention
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code:: bash
+
+   [Host-Type] - [Guest-Type]
+
+* **Host-Type :** Mentions the type of stress applied on the kernel of the Host
+* **Guest-Type :** Mentions the type of stress applied on the kernel of the Guest
+
+Example:
+
+.. code:: bash
+
+   Idle - CPU_stress
+
+The above name signifies that,
+
+- No stress is applied on the Host kernel
+
+- CPU stress is applied on the Guest kernel
+
+**Note:**
+
+- Stress is applied using the stress tool, which is installed as part of the deployment.
+  Stress can be applied on CPU, Memory and Input-Output (Read/Write) operations using the
+  stress tool, as sketched below.
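+
+For illustration, typical stress invocations for the three stress types look like the following
+(a sketch only; the exact worker counts and durations used by the deployment are assumptions):
+
+.. code:: bash
+
+   # CPU stress: 4 workers repeatedly calling sqrt()
+   stress --cpu 4 --timeout 60
+
+   # Memory (buffer) stress: 2 workers continuously allocating/freeing 256 MB each
+   stress --vm 2 --vm-bytes 256M --timeout 60
+
+   # IO stress: 4 workers repeatedly calling sync()
+   stress --io 4 --timeout 60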
+
+Version Features
+~~~~~~~~~~~~~~~~
+
++-----------------------+------------------+-----------------+
+| **Test Name**         | **Colorado**     | **Danube**      |
+|                       |                  |                 |
++-----------------------+------------------+-----------------+
+| - Idle - Idle         | ``Y``            | ``Y``           |
++-----------------------+------------------+-----------------+
+| - Cpustress - Idle    |                  | ``Y``           |
++-----------------------+------------------+-----------------+
+| - Memorystress - Idle |                  | ``Y``           |
++-----------------------+------------------+-----------------+
+| - IOstress - Idle     |                  | ``Y``           |
++-----------------------+------------------+-----------------+
+
+
+Idle-Idle test-type
+~~~~~~~~~~~~~~~~~~~
+Cyclictest is run on the Guest VM when the Host and Guest are not under any kind of stress. This is the basic
+cyclictest of the KVM4NFV project. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/idle-idle-test-type.png
+   :name: idle-idle test type
+   :width: 100%
+   :align: center
+
+CPU_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Here, the host is under CPU stress, where the sqrt() function is called multiple times on the kernel, which
+results in increased CPU load. The cyclictest will run on the guest, where the guest is under no stress.
+It outputs Avg, Min and Max latency values.
+
+.. figure:: images/cpu-stress-idle-test-type.png
+   :name: cpu-stress-idle test type
+   :width: 100%
+   :align: center
+
+Memory_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In this type, the host is under memory stress, where continuous memory operations are implemented to
+increase the memory stress (buffer stress). The cyclictest will run on the guest, where the guest is under
+no stress. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/memory-stress-idle-test-type.png
+   :name: memory-stress-idle test type
+   :width: 100%
+   :align: center
+
+IO_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~
+The host is under constant Input/Output stress, i.e., multiple read-write operations are invoked to
+increase stress. Cyclictest will run on the guest VM that is launched on the same host, where the guest
+is under no stress. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/io-stress-idle-test-type.png
+   :name: io-stress-idle test type
+   :width: 100%
+   :align: center
+
+CPU_Stress-CPU_Stress test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for the Danube release.
+
+Memory_Stress-Memory_Stress test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for the Danube release.
+
+IO_Stress-IO_Stress test type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for the Danube release.
+
+2. Packet Forwarding Test cases
+-------------------------------
+Packet forwarding is another test case of Kvm4nfv. It measures the time taken by a packet to return
+to the source after reaching its destination. This test case uses the automated test framework provided by
+the OPNFV VSWITCHPERF project and a traffic generator (IXIA is used for kvm4nfv). Only latency-result
+generating test cases are triggered as a part of the kvm4nfv daily job.
+
+The latency test measures the time required for a frame to travel from the originating device through the
+network to the destination device. Please note that the RFC2544 Latency measurement will be superseded with
+a measurement of average latency over all successfully transferred packets or frames, as sketched below.
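+
+To make the reported numbers concrete, here is a minimal sketch (illustrative only; the sample values
+are assumptions) of reducing per-frame latency samples to the Min/Avg/Max triple these tests publish:
+
+.. code:: python
+
+   def summarize_latencies(latencies_us):
+       """Reduce per-frame latency samples (in microseconds) to the
+       min/avg/max triple reported by the latency test cases."""
+       if not latencies_us:
+           raise ValueError("no successfully transferred frames")
+       return {
+           "min": min(latencies_us),
+           "avg": sum(latencies_us) / len(latencies_us),
+           "max": max(latencies_us),
+       }
+
+   # Example with three hypothetical frame latency samples
+   print(summarize_latencies([7.0, 8.5, 12.0]))
+   # {'min': 7.0, 'avg': 9.166666666666666, 'max': 12.0}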
+
+Packet forwarding test cases currently support the following test types:
+
+* Packet forwarding to Host
+
+* Packet forwarding to Guest
+
+* Packet forwarding to Guest using SRIOV
+
+The testing approach adopted is black box testing, meaning the test inputs can be generated and the
+outputs captured and completely evaluated from the outside of the System Under Test (SUT).
+
+Packet forwarding to Host
+~~~~~~~~~~~~~~~~~~~~~~~~~
+This is also known as the Physical port → vSwitch → physical port deployment.
+This test measures the time taken by the packet/frame generated by the traffic generator (phy) to travel
+through the network to the destination device (phy). This test yields min, avg and max latency values.
+These values signify the performance of the installed kernel.
+
+Packet flow,
+
+.. figure:: images/host_pk_fw.png
+   :name: packet forwarding to host
+   :width: 100%
+   :align: center
+
+Packet forwarding to Guest
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+This is also known as the Physical port → vSwitch → VNF → vSwitch → physical port deployment.
+
+This test measures the time taken by the packet/frame generated by the traffic generator (phy) to travel
+through the network, involving a guest, to the destination device (phy). This test yields min, avg and
+max latency values. These values signify the performance of the installed kernel.
+
+Packet flow,
+
+.. figure:: images/guest_pk_fw.png
+   :name: packet forwarding to guest
+   :width: 100%
+   :align: center
+
+Packet forwarding to Guest using SRIOV
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This test is used to verify the VNF and measure the base performance (maximum forwarding rate in
+fps and latency) that can be achieved by the VNF without a vSwitch. The performance metrics
+collected by this test will serve as a key comparison point for NIC passthrough technologies and
+vSwitches. VNF in this context refers to the hypervisor and the VM.
+
+**Note:** The Vsperf running on the host is still required.
+
+Packet flow,
+
+.. figure:: images/sriov_pk_fw.png
+   :name: packet forwarding to guest using sriov
+   :width: 100%
+   :align: center
diff --git a/docs/release/userguide/openstack.rst b/docs/release/userguide/openstack.rst new file mode 100644 index 000000000..929d2ba42 --- /dev/null +++ b/docs/release/userguide/openstack.rst @@ -0,0 +1,51 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+============================
+Danube OpenStack User Guide
+============================
+
+OpenStack is a cloud operating system developed and released by the
+`OpenStack project <https://www.openstack.org>`_. OpenStack is used in OPNFV
+for controlling pools of compute, storage, and networking resources in a Pharos
+compliant infrastructure.
+
+OpenStack is used in Danube to manage tenants (known in OpenStack as
+projects), users, services, images, flavours, and quotas across the Pharos
+infrastructure. The OpenStack interface provides the primary interface for an
+operational Danube deployment and it is from the "horizon console" that an
+OPNFV user will perform the majority of administrative and operational
+activities on the deployment.
+
+OpenStack references
+--------------------
+
+The `OpenStack user guide <http://docs.openstack.org/user-guide>`_ provides
+details and descriptions of how to configure and interact with the OpenStack
+deployment. This guide can be used by lab engineers and operators to tune the
+OpenStack deployment to your liking.
+
+Once you have configured OpenStack to your purposes, or the Danube
+deployment meets your needs as deployed, an operator, or administrator, will
+find the best guidance for working with OpenStack in the
+`OpenStack administration guide <http://docs.openstack.org/user-guide-admin>`_.
+
+Connecting to the OpenStack instance
+------------------------------------
+
+Once familiar with the basics of working with OpenStack, you will want to connect
+to the OpenStack instance via the Horizon console. The Horizon console provides
+a Web-based GUI that will allow you to operate the deployment.
+To do this you should open a browser on the JumpHost to the following address
+and enter the username and password:
+
+
+  http://{Controller-VIP}:80/index.html
+  username: admin
+  password: admin
+
+Other methods of interacting with and configuring OpenStack, like the REST API
+and CLI, are also available in the Danube deployment, see the
+`OpenStack administration guide <http://docs.openstack.org/user-guide-admin>`_
+for more information on using those interfaces.
diff --git a/docs/release/userguide/packet_forwarding.userguide.rst b/docs/release/userguide/packet_forwarding.userguide.rst new file mode 100644 index 000000000..31341a908 --- /dev/null +++ b/docs/release/userguide/packet_forwarding.userguide.rst @@ -0,0 +1,633 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+=================
+Packet Forwarding
+=================
+
+About Packet Forwarding
+-----------------------
+
+Packet Forwarding is a test suite of KVM4NFV. These latency tests measure the time taken by a
+**Packet** generated by the traffic generator to travel from the originating device through the
+network to the destination device. Packet Forwarding is implemented using the test framework
+implemented by the OPNFV VSWITCHPERF project and an ``IXIA Traffic Generator``.
+
+Version Features
+----------------
+
++-----------------------------+---------------------------------------------------+
+|                             |                                                   |
+| **Release**                 | **Features**                                      |
+|                             |                                                   |
++=============================+===================================================+
+|                             | - Packet Forwarding is not part of the Colorado   |
+| Colorado                    |   release of KVM4NFV                              |
+|                             |                                                   |
++-----------------------------+---------------------------------------------------+
+|                             | - Packet Forwarding is a testcase in KVM4NFV      |
+|                             | - Implements three scenarios (Host/Guest/SRIOV)   |
+|                             |   as part of testing in KVM4NFV                   |
+| Danube                      | - Uses the automated test framework of OPNFV      |
+|                             |   VSWITCHPERF software (PVP/PVVP)                 |
+|                             | - Works with the IXIA Traffic Generator           |
++-----------------------------+---------------------------------------------------+
+
+VSPERF
+------
+
+VSPerf is an OPNFV testing project.
+VSPerf will develop a generic and architecture-agnostic vSwitch testing framework and associated
+tests that will serve as a basis for validating the suitability of different vSwitch
+implementations in a Telco NFV deployment environment. The output of this project will be utilized
+by the OPNFV Performance and Test group and its associated projects, as part of OPNFV Platform and
+VNF level testing and validation.
+
+For complete VSPERF documentation go to `link.`_
+
+.. _link.: http://artifacts.opnfv.org/vswitchperf/danube/index.html
+
+
+Installation
+~~~~~~~~~~~~
+
+Guidelines for installing `VSPERF`_.
+
+..
_VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
+
+Supported Operating Systems
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+* CentOS 7
+* Fedora 20
+* Fedora 21
+* Fedora 22
+* RedHat 7.2
+* Ubuntu 14.04
+
+Supported vSwitches
+~~~~~~~~~~~~~~~~~~~
+
+The vSwitch must support Open Flow 1.3 or greater.
+
+* OVS (built from source).
+* OVS with DPDK (built from source).
+
+Supported Hypervisors
+~~~~~~~~~~~~~~~~~~~~~
+
+* Qemu version 2.6.
+
+Other Requirements
+~~~~~~~~~~~~~~~~~~
+
+The test suite requires Python 3.3 and relies on a number of other
+packages. These need to be installed for the test suite to function.
+
+Installation of the required packages, preparation of the Python 3 virtual
+environment and compilation of OVS, DPDK and QEMU is performed by the
+script **systems/build_base_machine.sh**. It should be executed under the
+user account which will be used for vsperf execution.
+
+ **Please Note:** Password-less sudo access must be configured for the given user before the script is executed.
+
+Execution of the installation script:
+
+.. code:: bash
+
+   $ cd vswitchperf
+   $ cd systems
+   $ ./build_base_machine.sh
+
+The script **build_base_machine.sh** will install all the vsperf dependencies
+in terms of system packages, Python 3.x and required Python modules.
+In case of CentOS 7 it will install Python 3.3 from an additional repository
+provided by Software Collections (`a link`_). In case of RedHat 7 it will
+install Python 3.4 as an alternate installation in /usr/local/bin. The installation
+script will also use `virtualenv`_ to create a vsperf virtual environment,
+which is isolated from the default Python environment. This environment will
+reside in a directory called **vsperfenv** in $HOME.
+
+You will need to activate the virtual environment every time you start a
+new shell session. Its activation is specific to your OS:
+
+For running test cases, VSPERF is installed on Intel pod1-node2, on which the CentOS
+operating system is installed. Only the VSPERF installation on CentOS is discussed here.
+For installation steps on other operating systems please refer to `here`_.
+
+.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
+
+For CentOS 7
+~~~~~~~~~~~~~~
+
+## Python 3 Packages
+
+To avoid file permission errors and Python version issues, use virtualenv to create an isolated environment with Python3.
+The required Python 3 packages can be found in the `requirements.txt` file in the root of the test suite.
+They can be installed in your virtual environment like so:
+
+.. code:: bash
+
+   scl enable python33 bash
+   # Create virtual environment
+   virtualenv vsperfenv
+   cd vsperfenv
+   source bin/activate
+   pip install -r requirements.txt
+
+
+You need to activate the virtual environment every time you start a new shell session.
+To activate, simply run:
+
+.. code:: bash
+
+   scl enable python33 bash
+   cd vsperfenv
+   source bin/activate
+
+
+Working Behind a Proxy
+~~~~~~~~~~~~~~~~~~~~~~
+
+If you're behind a proxy, you'll likely want to configure this before running any of the above. For example:
+
+.. code:: bash
+
+   export http_proxy="http://<username>:<password>@<proxy>:<port>/";
+   export https_proxy="https://<username>:<password>@<proxy>:<port>/";
+   export ftp_proxy="ftp://<username>:<password>@<proxy>:<port>/";
+   export socks_proxy="socks://<username>:<password>@<proxy>:<port>/";
+
+.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
+..
_virtualenv: https://virtualenv.readthedocs.org/en/latest/
+
+For other OS-specific activation steps click `this link`_:
+
+.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements
+
+Traffic-Generators
+------------------
+
+VSPERF supports many traffic generators. For configuring VSPERF to work with the available traffic generator, go through `this`_.
+
+.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html
+
+VSPERF supports the following traffic generators:
+
+  * Dummy (DEFAULT): Allows you to use your own external
+    traffic generator.
+  * IXIA (IxNet and IxOS)
+  * Spirent TestCenter
+  * Xena Networks
+  * MoonGen
+
+To see the list of traffic gens from the cli:
+
+.. code-block:: console
+
+   $ ./vsperf --list-trafficgens
+
+This guide provides the details of how to install
+and configure the various traffic generators.
+
+As KVM4NFV uses only the IXIA traffic generator, it is discussed here. For complete documentation regarding traffic generators please follow this `link`_.
+
+.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD
+
+IXIA Setup
+----------
+
+Hardware Requirements
+~~~~~~~~~~~~~~~~~~~~~
+
+VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork), a machine that
+runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.
+
+Installation
+~~~~~~~~~~~~
+
+Follow the installation instructions below to install.
+
+On the CentOS 7 system
+~~~~~~~~~~~~~~~~~~~~~~
+
+You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.
+
+On the IXIA client software system
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Find the IxNetwork TCL server app (start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)
+ - Right click on IxNetwork TCL Server, select properties
+ - Under the shortcut tab, in the Target dialogue box, make sure there is the argument "-tclport xxxx"
+
+where xxxx is your port number (take note of this port number; you will need it for the 10_custom.conf file).
+
+.. figure:: images/IXIA1.png
+   :name: IXIA1 setup
+   :width: 100%
+   :align: center
+
+- Hit Ok and start the TCL server application
+
+VSPERF configuration
+--------------------
+
+There are several configuration options specific to the IxNetworks traffic generator
+from IXIA. It is essential to set them correctly before VSPERF is executed
+for the first time.
+
+A detailed description of the options follows:
+
+  * TRAFFICGEN_IXNET_MACHINE - IP address of the server where the IxNetwork TCL Server is running
+  * TRAFFICGEN_IXNET_PORT - PORT where the IxNetwork TCL Server is accepting connections from
+    TCL clients
+  * TRAFFICGEN_IXNET_USER - username which will be used during communication with the IxNetwork
+    TCL Server and IXIA chassis
+  * TRAFFICGEN_IXIA_HOST - IP address of the IXIA traffic generator chassis
+  * TRAFFICGEN_IXIA_CARD - identification of the card with dedicated ports at the IXIA chassis
+  * TRAFFICGEN_IXIA_PORT1 - identification of the first dedicated port at TRAFFICGEN_IXIA_CARD
+    at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
+    unidirectional traffic, it is essential to correctly connect the 1st IXIA port to the 1st NIC
+    at the DUT, i.e. to the first PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
+    be able to pass through the vSwitch.
+  * TRAFFICGEN_IXIA_PORT2 - identification of the second dedicated port at TRAFFICGEN_IXIA_CARD
+    at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
+    unidirectional traffic, it is essential to correctly connect the 2nd IXIA port to the 2nd NIC
+    at the DUT, i.e. to the second PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
+    be able to pass through the vSwitch.
+  * TRAFFICGEN_IXNET_LIB_PATH - path to the DUT-specific installation of the IxNetwork TCL API
+  * TRAFFICGEN_IXNET_TCL_SCRIPT - name of the TCL script which VSPERF will use for
+    communication with the IXIA TCL server
+  * TRAFFICGEN_IXNET_TESTER_RESULT_DIR - folder accessible from the IxNetwork TCL server,
+    where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
+  * TRAFFICGEN_IXNET_DUT_RESULT_DIR - directory accessible from the DUT, where test
+    results from the IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
+    test-results-share_
+
+.. _test-results-share:
+
+Test results share
+~~~~~~~~~~~~~~~~~~
+
+VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
+results are stored at the IxNetwork TCL server. Results are stored in the folder defined by
+the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
+folder must be shared (e.g. via the samba protocol) between the TCL Server and the DUT, where
+VSPERF is executed. VSPERF expects that test results will be available in the directory
+configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.
+
+Example of sharing configuration:
+
+  * Create a new folder at the IxNetwork TCL server machine, e.g. ``c:\ixia_results``
+  * Modify the sharing options of the ``ixia_results`` folder to share it with everybody
+  * Create a new directory at the DUT, where the shared directory with results
+    will be mounted, e.g. ``/mnt/ixia_results``
+  * Update your custom VSPERF configuration file as follows:
+
+    .. code-block:: python
+
+       TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
+       TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'
+
+    Note: It is essential to use slashes '/' also in the path
+    configured by the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
+
+* Install the cifs-utils package,
+
+  e.g. on an rpm-based Linux distribution:
+
+.. code-block:: console
+
+   yum install cifs-utils
+
+* Mount the shared directory, so VSPERF can access test results,
+
+  e.g. by mounting it manually (or adding a corresponding record to ``/etc/fstab``):
+
+.. code-block:: console
+
+   mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results
+     -o file_mode=0777,dir_mode=0777,nounix
+
+It is recommended to verify that any new file inserted into the ``c:/ixia_results`` folder
+is visible at the DUT inside the ``/mnt/ixia_results`` directory.
+
+
+Cloning and building src dependencies
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build
+them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src directory
+contains makefiles that will allow you to clone and build the libraries that VSPERF depends on,
+such as DPDK and OVS. To clone and build simply:
+
+.. code:: bash
+
+   cd src
+   make
+
+To delete a src subdirectory and its contents to allow you to re-clone simply use:
+
+.. code:: bash
+
+   make cleanse
+
+Configure the `./conf/10_custom.conf` file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The supplied `10_custom.conf` file must be modified, as it contains configuration items for which there are no reasonable default values.
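+
+For orientation, a minimal sketch of such a modification is shown below (all values are
+illustrative placeholders, not working settings; the parameter names are the ones described
+in the IXIA section above):
+
+.. code-block:: python
+
+   # ./conf/10_custom.conf -- illustrative placeholder values only
+   TRAFFICGEN = 'IxNet'                       # use the IxNetwork traffic generator
+   TRAFFICGEN_IXNET_MACHINE = '10.10.120.6'   # IP of the IxNetwork TCL Server
+   TRAFFICGEN_IXNET_PORT = '8009'             # the "-tclport xxxx" noted earlier
+   TRAFFICGEN_IXNET_USER = 'admin'
+   TRAFFICGEN_IXIA_HOST = '10.10.120.7'       # IXIA chassis IP
+   TRAFFICGEN_IXIA_CARD = '1'
+   TRAFFICGEN_IXIA_PORT1 = '1'                # wired to the 1st NIC in WHITELIST_NICS
+   TRAFFICGEN_IXIA_PORT2 = '2'                # wired to the 2nd NIC in WHITELIST_NICS
+   TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
+   TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'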
+
+The configuration items that can be added are not limited to the initial contents. Any configuration item
+mentioned in any .conf file in the `./conf` directory can be added, and that item will be overridden by the custom
+configuration value.
+
+Using a custom settings file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Alternatively a custom settings file can be passed to `vsperf` via the `--conf-file` argument.
+
+.. code:: bash
+
+   ./vsperf --conf-file <path_to_settings_py> ...
+
+Note that configuration passed in via the environment (`--load-env`) or via another command line
+argument will override both the default and your custom configuration files. This
+"priority hierarchy" can be described like so (1 = max priority):
+
+1. Command line arguments
+2. Environment variables
+3. Configuration file(s)
+
+vloop_vnf
+~~~~~~~~~
+
+VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
+scenarios involving VMs. The image can be downloaded from
+`<http://artifacts.opnfv.org/>`__.
+
+Please see the installation instructions for information on :ref:`vloop-vnf`
+images.
+
+.. _l2fwd-module:
+
+l2fwd Kernel Module
+~~~~~~~~~~~~~~~~~~~
+
+A kernel module that provides OSI Layer 2 Ipv4 termination or forwarding with
+support for Destination Network Address Translation (DNAT) for both the MAC and
+IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
+
+Executing tests
+~~~~~~~~~~~~~~~~
+
+Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:
+
+.. code:: bash
+
+   username ALL=(ALL) NOPASSWD: ALL
+
+username in the example above should be replaced with a real username.
+
+To list the available tests:
+
+.. code:: bash
+
+   ./vsperf --list-tests
+
+
+To run a group of tests, for example all tests with a name containing
+'RFC2544':
+
+.. code:: bash
+
+   ./vsperf --conf-file=user_settings.py --tests="RFC2544"
+
+To run all tests:
+
+.. code:: bash
+
+   ./vsperf --conf-file=user_settings.py
+
+Some tests allow for configurable parameters, including test duration (in seconds) as well as packet sizes (in bytes).
+
+.. code:: bash
+
+   ./vsperf --conf-file user_settings.py
+            --tests RFC2544Tput
+            --test-param "rfc2544_duration=10;packet_sizes=128"
+
+For all available options, check out the help dialog:
+
+.. code:: bash
+
+   ./vsperf --help
+
+
+Testcases
+----------
+
+Available tests in VSPERF are:
+
+  * phy2phy_tput
+  * phy2phy_forwarding
+  * back2back
+  * phy2phy_tput_mod_vlan
+  * phy2phy_cont
+  * pvp_cont
+  * pvvp_cont
+  * pvpv_cont
+  * phy2phy_scalability
+  * pvp_tput
+  * pvp_back2back
+  * pvvp_tput
+  * pvvp_back2back
+  * phy2phy_cpu_load
+  * phy2phy_mem_load
+
+VSPERF modes of operation
+--------------------------
+
+VSPERF can be run in different modes. By default it will configure the vSwitch,
+traffic generator and VNF. However, it can be used just for configuration
+and execution of the traffic generator. Another option is execution of all
+components except the traffic generator itself.
+
+The mode of operation is driven by the configuration parameter -m or --mode:
+
+..
+
+Using a custom settings file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Alternatively, a custom settings file can be passed to `vsperf` via the
+`--conf-file` argument.
+
+.. code:: bash
+
+    ./vsperf --conf-file <path_to_settings_py> ...
+
+Note that configuration passed in via the environment (`--load-env`) or via
+another command line argument will override both the default and your custom
+configuration files. This "priority hierarchy" can be described like so
+(1 = max priority), and is illustrated below:
+
+1. Command line arguments
+2. Environment variables
+3. Configuration file(s)
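+
+For example, assuming `user_settings.py` contains ``TRAFFICGEN = 'Dummy'``, the
+command line argument below sits higher in the hierarchy, so the IxNet traffic
+generator would be used (a hypothetical illustration, not a required setup):
+
+.. code:: bash
+
+    # user_settings.py sets TRAFFICGEN = 'Dummy'; the CLI argument wins:
+    ./vsperf --conf-file user_settings.py --trafficgen IxNet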
+
+vloop_vnf
+~~~~~~~~~
+
+VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
+scenarios involving VMs. The image can be downloaded from
+`<http://artifacts.opnfv.org/>`__.
+
+Please see the installation instructions for information on :ref:`vloop-vnf`
+images.
+
+.. _l2fwd-module:
+
+l2fwd Kernel Module
+~~~~~~~~~~~~~~~~~~~
+
+A kernel module that provides OSI Layer 2 IPv4 termination or forwarding with
+support for Destination Network Address Translation (DNAT) of both the MAC and
+IP addresses. l2fwd can be found in ``<vswitchperf_dir>/src/l2fwd``.
+
+Executing tests
+~~~~~~~~~~~~~~~~
+
+Before running any tests, make sure you have root permissions by adding the
+following line to ``/etc/sudoers``:
+
+.. code:: bash
+
+    username ALL=(ALL) NOPASSWD: ALL
+
+``username`` in the example above should be replaced with a real username.
+
+To list the available tests:
+
+.. code:: bash
+
+    ./vsperf --list-tests
+
+To run a group of tests, for example all tests with a name containing
+'RFC2544':
+
+.. code:: bash
+
+    ./vsperf --conf-file=user_settings.py --tests="RFC2544"
+
+To run all tests:
+
+.. code:: bash
+
+    ./vsperf --conf-file=user_settings.py
+
+Some tests allow for configurable parameters, including test duration
+(in seconds) as well as packet sizes (in bytes):
+
+.. code:: bash
+
+    ./vsperf --conf-file user_settings.py \
+        --tests RFC2544Tput \
+        --test-params "rfc2544_duration=10;packet_sizes=128"
+
+For all available options, check out the help dialog:
+
+.. code:: bash
+
+    ./vsperf --help
+
+
+Testcases
+----------
+
+The tests available in VSPERF are listed below; an example of running a single
+one follows the list:
+
+  * phy2phy_tput
+  * phy2phy_forwarding
+  * back2back
+  * phy2phy_tput_mod_vlan
+  * phy2phy_cont
+  * pvp_cont
+  * pvvp_cont
+  * pvpv_cont
+  * phy2phy_scalability
+  * pvp_tput
+  * pvp_back2back
+  * pvvp_tput
+  * pvvp_back2back
+  * phy2phy_cpu_load
+  * phy2phy_mem_load
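+
+A single testcase from the list above can be executed by passing its name as a
+positional argument, in the same way as the ``pvp_tput`` example in the PCI
+passthrough section later in this guide:
+
+.. code:: bash
+
+    ./vsperf --conf-file=user_settings.py phy2phy_tput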
+
+VSPERF modes of operation
+--------------------------
+
+VSPERF can be run in different modes. By default it will configure the vSwitch,
+the traffic generator and the VNF. However, it can also be used to configure
+and execute only the traffic generator, or to execute all components except the
+traffic generator itself.
+
+The mode of operation is driven by the configuration parameter ``-m`` or
+``--mode``:
+
+.. code-block:: console
+
+    -m MODE, --mode MODE  vsperf mode of operation;
+                          Values:
+                            "normal" - execute vSwitch, VNF and traffic generator
+                            "trafficgen" - execute only traffic generator
+                            "trafficgen-off" - execute vSwitch and VNF
+                            "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
+
+In case VSPERF is executed in "trafficgen" mode, the configuration of the
+traffic generator can be modified through the ``TRAFFIC`` dictionary passed to
+the ``--test-params`` option. It is not needed to specify all values of the
+``TRAFFIC`` dictionary; it is sufficient to specify only the values which
+should be changed. A detailed description of the ``TRAFFIC`` dictionary can be
+found at :ref:`configuration-of-traffic-dictionary`.
+
+Example of execution of VSPERF in "trafficgen" mode:
+
+.. code-block:: console
+
+    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
+        --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
+
+
+Packet Forwarding Test Scenarios
+--------------------------------
+
+KVM4NFV currently implements three scenarios as part of testing:
+
+  * Host Scenario
+  * Guest Scenario
+  * SR-IOV Scenario
+
+
+Packet Forwarding Host Scenario
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here the host DUT has VSPERF installed and is properly configured to use the
+IXIA traffic generator, by providing the IXIA card, ports and library paths
+along with the chassis IP; please refer to the figure below.
+
+.. figure:: images/Host_Scenario.png
+   :name: Host_Scenario
+   :width: 100%
+   :align: center
+
+Packet Forwarding Guest Scenario
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here the guest is a Virtual Machine (VM) launched on the host/DUT with Qemu,
+using the vloop_vnf image provided by the vsperf project. In this latency test,
+the time taken by a frame/packet to travel from the originating device through
+the network, involving a guest, to the destination device is calculated. The
+resulting latency values define the performance of the installed kernel.
+
+.. figure:: images/Guest_Scenario.png
+   :name: Guest_Scenario
+   :width: 100%
+   :align: center
+
+Packet Forwarding SRIOV Scenario
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this test, the packet generated at the IXIA is forwarded to the guest VM
+launched on the host by implementing an SR-IOV interface at the NIC level of
+the host, i.e. the DUT. The time taken by the packet to travel through the
+network back to the destination, the IXIA traffic generator, is calculated and
+published as a test result for this scenario.
+
+SRIOV-support_ below details how to use SR-IOV.
+
+.. figure:: images/SRIOV_Scenario.png
+   :name: SRIOV_Scenario
+   :width: 100%
+   :align: center
+
+Using vfio_pci with DPDK
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use vfio with DPDK instead of igb_uio, add the following parameter to your
+custom configuration file:
+
+.. code-block:: python
+
+    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']
+
+**NOTE:** In case DPDK is installed from a binary package, please set
+``PATHS['dpdk']['bin']['modules']`` instead.
+
+**NOTE:** Please ensure that Intel VT-d is enabled in the BIOS.
+
+**NOTE:** Please ensure your boot/grub parameters include the following:
+
+.. code-block:: console
+
+    iommu=pt intel_iommu=on
+
+To check that IOMMU is enabled on your platform:
+
+.. code-block:: console
+
+    $ dmesg | grep IOMMU
+    [    0.000000] Intel-IOMMU: enabled
+    [    0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
+    [    0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
+    [    0.139893] IOAPIC id 2 under DRHD base  0xfbffe000 IOMMU 0
+    [    0.139894] IOAPIC id 0 under DRHD base  0xebffc000 IOMMU 1
+    [    0.139895] IOAPIC id 1 under DRHD base  0xebffc000 IOMMU 1
+    [    3.335744] IOMMU: dmar0 using Queued invalidation
+    [    3.335746] IOMMU: dmar1 using Queued invalidation
+    ....
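+
+VSPERF takes care of NIC binding during test setup, but for a manual sanity
+check you can bind a port to vfio-pci with DPDK's ``dpdk-devbind.py`` utility
+(the PCI address is illustrative and the script location varies between DPDK
+versions):
+
+.. code-block:: console
+
+    $ modprobe vfio-pci
+    $ dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
+    $ dpdk-devbind.py --status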
+
+.. _SRIOV-support:
+
+Using SRIOV support
+~~~~~~~~~~~~~~~~~~~
+
+To use virtual functions of a NIC with SRIOV support, use the extended form of
+the NIC PCI slot definition:
+
+.. code-block:: python
+
+    WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']
+
+Where ``vf`` is an indication of virtual function usage and the following
+number defines the VF to be used. In case VF usage is detected, vswitchperf
+will enable SRIOV support for the given card and it will detect the PCI slot
+numbers of the selected VFs.
+
+So in the example above, one VF will be configured for NIC '0000:03:00.0' and
+four VFs will be configured for NIC '0000:03:00.1'. Vswitchperf will detect the
+PCI addresses of the selected VFs and it will use them during test execution.
+
+At the end of vswitchperf execution, SRIOV support will be disabled.
+
+SRIOV support is generic and it can be used in different testing scenarios.
+For example:
+
+* vSwitch tests with or without DPDK support, to verify the impact of VF usage
+  on vSwitch performance
+* tests without a vSwitch, where traffic is forwarded directly between VF
+  interfaces by a packet forwarder (e.g. the testpmd application)
+* tests without a vSwitch, where the VM accesses VF interfaces directly by
+  PCI passthrough, to measure raw VM throughput performance
+
+Using QEMU with PCI passthrough support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Raw virtual machine throughput performance can be measured by execution of a
+PVP test with direct access to NICs by PCI passthrough. To execute a VM with
+direct access to PCI devices, enable vfio-pci. In order to use virtual
+functions, SRIOV-support_ must be enabled.
+
+Execution of a test with PCI passthrough with the vswitch disabled:
+
+.. code-block:: console
+
+    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
+        --vswitch none --vnf QemuPciPassthrough pvp_tput
+
+Any of the supported guest loopback applications can be used inside a VM with
+PCI passthrough support.
+
+Note: Qemu with PCI passthrough support can be used only with the PVP test
+deployment.
+
+Results
+~~~~~~~
+
+The results for the packet forwarding test cases are uploaded to artifacts.
+The link for the same can be found below:
+
+.. code:: bash
+
+    http://artifacts.opnfv.org/kvmfornfv.html
diff --git a/docs/release/userguide/pcm_utility.userguide.rst b/docs/release/userguide/pcm_utility.userguide.rst
new file mode 100644
index 000000000..6695d50c0
--- /dev/null
+++ b/docs/release/userguide/pcm_utility.userguide.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+======================
+PCM Utility in KVM4NFV
+======================
+
+Collecting Memory Bandwidth Information using PCM utility
+---------------------------------------------------------
+This chapter describes how the PCM utility is used in kvm4nfv
+to collect memory bandwidth information.
+
+About PCM utility
+-----------------
+The Intel® Performance Counter Monitor provides sample C++ routines and
+utilities to estimate the internal resource utilization of the latest Intel®
+Xeon® and Core™ processors and gain a significant performance boost. The Intel
+PCM toolset contains the pcm-memory.x tool, which is used for observing memory
+traffic intensity.
+
+Version Features
+-----------------
+
++-----------------------------+-----------------------------------------------+
+|                             |                                               |
+| **Release**                 | **Features**                                  |
+|                             |                                               |
++=============================+===============================================+
+|                             | - In the Colorado release, memory bandwidth   |
+| Colorado                    |   information was not collected through the   |
+|                             |   cyclic testcases.                           |
+|                             |                                               |
++-----------------------------+-----------------------------------------------+
+|                             | - pcm-memory.x will be executed before the    |
+| Danube                      |   execution of every testcase                 |
+|                             | - pcm-memory.x provides the memory bandwidth  |
+|                             |   data throughout the testcases               |
+|                             | - used for all test-types (stress/idle)       |
+|                             | - generated memory bandwidth logs are         |
+|                             |   published to the KVMFORNFV artifacts        |
++-----------------------------+-----------------------------------------------+
+
+Implementation of pcm-memory.x:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The tool measures the memory bandwidth observed for every channel, reporting
+separate throughput for reads from memory and writes to memory. The
+pcm-memory.x tool tends to report values slightly higher than the
+application's own measurement.
+
+Command:
+
+.. code:: bash
+
+    sudo ./pcm-memory.x [delay]|[external_program]
+
+Parameters
+
+- pcm-memory can be called with either a delay or an
+  external_program/application as a parameter
+- If a delay of e.g. 5 is given, the output will be produced with a refresh of
+  every 5 seconds.
+- If an external_program (script/application) is given, the output will be
+  produced after the execution of the application or the script passed as a
+  parameter.
+
+**Sample Output:**
+
+The output produced with the default refresh of 1 second:
+
++---------------------------------------+---------------------------------------+
+|              Socket 0                 |              Socket 1                 |
++=======================================+=======================================+
+| Memory Performance Monitoring         | Memory Performance Monitoring         |
+|                                       |                                       |
++---------------------------------------+---------------------------------------+
+| Mem Ch 0: Reads (MB/s): 6870.81       | Mem Ch 0: Reads (MB/s): 7406.36       |
+|           Writes(MB/s): 1805.03       |           Writes(MB/s): 1951.25       |
+| Mem Ch 1: Reads (MB/s): 6873.91       | Mem Ch 1: Reads (MB/s): 7411.11       |
+|           Writes(MB/s): 1810.86       |           Writes(MB/s): 1957.73       |
+| Mem Ch 2: Reads (MB/s): 6866.77       | Mem Ch 2: Reads (MB/s): 7403.39       |
+|           Writes(MB/s): 1804.38       |           Writes(MB/s): 1951.42       |
+| Mem Ch 3: Reads (MB/s): 6867.47       | Mem Ch 3: Reads (MB/s): 7403.66       |
+|           Writes(MB/s): 1805.53       |           Writes(MB/s): 1950.95       |
+|                                       |                                       |
+| NODE0 Mem Read (MB/s) : 27478.96      | NODE1 Mem Read (MB/s) : 29624.51      |
+| NODE0 Mem Write (MB/s): 7225.79       | NODE1 Mem Write (MB/s): 7811.36       |
+| NODE0 P. Write (T/s)  : 214810        | NODE1 P. Write (T/s)  : 238294        |
+| NODE0 Memory (MB/s)   : 34704.75      | NODE1 Memory (MB/s)   : 37435.87      |
++---------------------------------------+---------------------------------------+
+| - System Read Throughput(MB/s):   57103.47                                    |
+| - System Write Throughput(MB/s):  15037.15                                    |
+| - System Memory Throughput(MB/s): 72140.62                                    |
++-------------------------------------------------------------------------------+
+
+pcm-memory.x in KVM4NFV:
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+pcm-memory is a part of KVM4NFV in the Danube release. pcm-memory.x is executed
+with a delay of 60 seconds before starting every testcase, to monitor the
+memory traffic intensity; this is handled in the collect_MBWInfo function. The
+memory bandwidth information is collected into the logs throughout the
+testcase, updating every 60 seconds.
+
+ **Pre-requisites:**
+
+ 1. Check that the processor is supported by PCM. The latest pcm utility
+    version (2.11) supports the Intel® Xeon® E5 v4 processor family.
+
+ 2. Disable the NMI watchdog (see the sketch after this list).
+
+ 3. Load the MSR kernel module (see the sketch after this list).
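+
+The second and third prerequisites can typically be handled with the commands
+below; this is only a sketch and the exact steps may differ per distribution:
+
+.. code:: bash
+
+    # Disable the NMI watchdog so PCM can take over the performance counters
+    echo 0 > /proc/sys/kernel/nmi_watchdog
+
+    # Load the msr kernel module so PCM can access model specific registers
+    modprobe msr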
+
+Memory Bandwidth logs for KVM4NFV can be found `here`_:
+
+.. code:: bash
+
+    http://artifacts.opnfv.org/kvmfornfv.html
+
+.. _here: http://artifacts.opnfv.org/kvmfornfv.html
+
+Details of the functions implemented:
+
+The install_Pcm function handles the installation of the pcm utility and the
+prerequisites required for the pcm-memory.x tool to execute.
+
+.. code:: bash
+
+    $ git clone https://github.com/opcm/pcm
+    $ cd pcm
+    $ make
+
+In the collect_MBWInfo function, the command below is executed on the node and
+its output is collected into the logs with the timestamp and test type. The
+function is called at the beginning of each testcase, and a signal is passed to
+terminate the pcm-memory process which was executing throughout the cyclic
+testcase.
+
+.. code:: bash
+
+    $ pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp}
+
+    where,
+    ${testType} = verify (or) daily
+
+Future Scope
+------------
+PCM information will be added to the cyclictest of kvm4nfv in yardstick.
diff --git a/docs/release/userguide/tuning.userguide.rst b/docs/release/userguide/tuning.userguide.rst
new file mode 100644
index 000000000..3673ae2d4
--- /dev/null
+++ b/docs/release/userguide/tuning.userguide.rst
@@ -0,0 +1,102 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+Low Latency Tuning Suggestions
+==============================
+
+Correct configuration is critical for improving NFV performance/latency. Even
+when working on the same codebase, different configurations can cause wildly
+different performance/latency results.
+
+There are many combinations of configurations, from hardware configuration to
+operating system configuration and application level configuration, and there
+is no single configuration that works for every case. To tune a specific
+scenario, it is important to know the behaviors of different configurations
+and their impact.
+
+Platform Configuration
+----------------------
+
+Some hardware features can be configured through a firmware interface (like
+the BIOS), but others may not be configurable (e.g. SMI on most platforms).
+
+* **Power management:**
+  Most power management related features save power at the expense of latency.
+  These features include: Intel® Turbo Boost Technology, Enhanced Intel®
+  SpeedStep, and processor C states and P states. Normally they should be
+  disabled but, depending on the real-time application design and latency
+  requirements, there might be some features that can be enabled if the impact
+  on deterministic execution of the workload is small.
+
+* **Hyper-Threading:**
+  Logical cores that share resources with other logical cores can introduce
+  latency, so the recommendation is to disable this feature for realtime use
+  cases.
+
+* **Legacy USB Support/Port 60/64 Emulation:**
+  These features involve some emulation in firmware and can introduce random
+  latency. It is recommended that they are disabled.
+
+* **SMI (System Management Interrupt):**
+  SMI runs outside of the kernel code and can potentially cause latency.
+  Unfortunately, there is no simple way to disable it. Some vendors may
+  provide related switches in the BIOS, but most machines do not have this
+  capability.
+
+Operating System Configuration
+------------------------------
+
+* **CPU isolation:**
+  To achieve deterministic latency, dedicated CPUs should be allocated for the
+  realtime application. This can be achieved by isolating CPUs from the kernel
+  scheduler. Please refer to
+  http://lxr.free-electrons.com/source/Documentation/kernel-parameters.txt#L1608
+  for more information.
+
+* **Memory allocation:**
+  Memory should be reserved for realtime applications, and usually hugepages
+  should be used to reduce page faults/TLB misses.
+
+* **IRQ affinity:**
+  All the non-realtime IRQs should be affinitized to non-realtime CPUs to
+  reduce the impact on realtime CPUs. Some OS distributions contain an
+  irqbalance daemon, which balances the IRQs among all the cores dynamically;
+  it should be disabled as well.
+
+* **Device assignment for VM:**
+  If a device is used in a VM, then device passthrough is desirable. In this
+  case, the IOMMU should be enabled.
+
+* **Tickless:**
+  Frequent clock ticks cause latency. CONFIG_NOHZ_FULL should be enabled in
+  the Linux kernel. With CONFIG_NOHZ_FULL, the physical CPU will trigger many
+  fewer clock tick interrupts (currently, 1 tick per second). This can reduce
+  latency because each host timer interrupt triggers a VM exit from guest to
+  host, which causes performance/latency impacts.
+
+* **TSC:**
+  Mark the TSC clock source as reliable. A TSC clock source that seems to be
+  unreliable causes the kernel to continuously enable the clock source
+  watchdog to check if the TSC frequency is still correct. On recent Intel
+  platforms with Constant TSC/Invariant TSC/Synchronized TSC, the TSC is
+  reliable, so the watchdog is useless but causes latency.
+
+* **Idle:**
+  The poll option forces a polling idle loop that can slightly improve the
+  performance of waking up an idle CPU.
+
+* **RCU_NOCB:**
+  RCU is a kernel synchronization mechanism. Refer to
+  http://lxr.free-electrons.com/source/Documentation/RCU/whatisRCU.txt for
+  more information. With RCU_NOCB, the impact from RCU on the VNF will be
+  reduced.
+
+* **Disable the RT throttling:**
+  RT throttling is a Linux kernel mechanism that occurs when a process or
+  thread uses 100% of the core, leaving no resources for the Linux scheduler
+  to execute the kernel/housekeeping tasks. RT throttling increases the
+  latency, so it should be disabled.
+
+* **NUMA configuration:**
+  To achieve the best latency, the CPUs/memory and devices allocated for the
+  realtime application/VM should be in the same NUMA node. A consolidated
+  example of several of these settings follows this list.
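+
+Several of the suggestions above map directly to kernel boot parameters and
+sysctl settings. The fragment below is a sketch for a machine where cores 2-7
+are dedicated to the realtime workload; the core lists and hugepage counts are
+illustrative and must be adapted to the actual topology:
+
+.. code-block:: console
+
+    # Kernel command line fragment (e.g. appended in the grub configuration):
+    #   isolcpus/nohz_full/rcu_nocbs - CPU isolation, tickless and RCU offload
+    #   default_hugepagesz/hugepages - hugepage reservation
+    #   idle=poll, tsc=reliable      - polling idle loop, trusted TSC
+    isolcpus=2-7 nohz_full=2-7 rcu_nocbs=2-7 default_hugepagesz=1G hugepagesz=1G hugepages=8 idle=poll tsc=reliable iommu=pt intel_iommu=on
+
+    # Disable RT throttling at runtime:
+    sysctl -w kernel.sched_rt_runtime_us=-1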