Diffstat (limited to 'docs/testing')
-rw-r--r--docs/testing/developer/devguide/dev-guide.rst142
-rw-r--r--docs/testing/ecosystem/energy-monitoring.rst260
-rw-r--r--docs/testing/ecosystem/overview.rst186
3 files changed, 41 insertions, 547 deletions
diff --git a/docs/testing/developer/devguide/dev-guide.rst b/docs/testing/developer/devguide/dev-guide.rst
index c1d39dd45..5cf9b94d2 100644
--- a/docs/testing/developer/devguide/dev-guide.rst
+++ b/docs/testing/developer/devguide/dev-guide.rst
@@ -1,10 +1,6 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
-***********************************************************
-NOTE - This file will be updated during the Lakelse Release
-***********************************************************
-
***********************
Testing developer guide
***********************
@@ -18,12 +14,9 @@ Testing developer guide
Introduction
============
-The OPNFV testing ecosystem is wide.
-
The goal of this guide is to provide guidelines for new developers
involved in test areas.
-For the description of the ecosystem, see `[DEV1]`_.
=================
Developer journey
@@ -42,11 +35,11 @@ resource across the different projects.
If you develop new test cases, the best practice is to contribute upstream as
much as possible. You may contact the testing group to know which project - in
-OPNFV or upstream - would be the best place to host the test cases. Such
+Anuket or upstream - would be the best place to host the test cases. Such
contributions are usually directly connected to a specific project, more details
can be found in the user guides of the testing projects.
-Each OPNFV testing project provides test cases and the framework to manage them.
+Each Anuket testing project provides test cases and the framework to manage them.
As a developer, you can obviously contribute to them. The developer guide of
the testing projects shall indicate the procedure to follow.
@@ -59,18 +52,6 @@ event is organized after each release. Most of the test projects are present.
The summit is also a good opportunity to meet most of the actors `[DEV4]`_.
-Be involved in the testing group
-================================
-
-The testing group is a self organized working group. The OPNFV projects dealing
-with testing are invited to participate in order to elaborate and consolidate a
-consistant test strategy (test case definition, scope of projects, resources for
-long duration, documentation, ...) and align tooling or best practices.
-
-A weekly meeting is organized, the agenda may be amended by any participant.
-2 slots have been defined (US/Europe and APAC). Agendas and minutes are public.
-See `[DEV3]`_ for details.
-The testing group IRC channel is #opnfv-testperf
Best practices
==============
@@ -134,8 +115,7 @@ possible to prepare the environment and run tests through a CLI.
Dockerization
-------------
Dockerization has been introduced in Brahmaputra and adopted by most of the test
-projects. Docker containers are pulled on the jumphost of OPNFV POD.
-<TODO Jose/Mark/Alec>
+projects.
Code quality
------------
@@ -145,8 +125,7 @@ and more precisely to implement some verifications before any merge:
* pep8
* pylint
-* unit tests (python 2.7)
-* unit tests (python 3.5)
+* unit tests
The code of the test project must be covered by unit tests. The coverage
shall be reasonable and not decrease when adding new features to the framework.
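As an illustration of the kind of unit test meant here, the sketch below tests a hypothetical framework helper (`parse_criteria` stands in for any real function of the test project; it is not part of an actual Anuket project):

```python
import unittest

def parse_criteria(value):
    """Hypothetical framework helper: normalise a verdict to PASS/FAIL."""
    if str(value).strip().lower() in ("pass", "passed", "100"):
        return "PASS"
    return "FAIL"

class ParseCriteriaTest(unittest.TestCase):
    """Unit tests keeping the helper covered as the framework evolves."""

    def test_pass_values(self):
        self.assertEqual(parse_criteria("pass"), "PASS")
        self.assertEqual(parse_criteria(" Passed "), "PASS")

    def test_fail_values(self):
        self.assertEqual(parse_criteria("error"), "FAIL")
```

Such tests would typically run with ``python -m unittest`` in the same pre-merge gate as the pep8 and pylint checks.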
@@ -164,39 +143,13 @@ and/or traffic generation. Some of the tools can be listed as follows:
+---------------+----------------------+------------------------------------+
| Project | Tool | Comments |
+===============+======================+====================================+
-| Bottlenecks | TODO | |
-+---------------+----------------------+------------------------------------+
| Functest | Tempest | OpenStack test tooling |
| | Rally | OpenStack test tooling |
| | Refstack | OpenStack test tooling |
| | RobotFramework | Used for ODL tests |
+---------------+----------------------+------------------------------------+
-| QTIP | Unixbench | |
-| | RAMSpeed | |
-| | nDPI | |
-| | openSSL | |
-| | inxi | |
-+---------------+----------------------+------------------------------------+
-| Storperf | TODO | |
-+---------------+----------------------+------------------------------------+
| VSPERF | TODO | |
+---------------+----------------------+------------------------------------+
-| Yardstick | Moongen | Traffic generator |
-| | Trex | Traffic generator |
-| | Pktgen | Traffic generator |
-| | IxLoad, IxNet | Traffic generator |
-| | SPEC | Compute |
-| | Unixbench | Compute |
-| | RAMSpeed | Compute |
-| | LMBench | Compute |
-| | Iperf3 | Network |
-| | Netperf | Network |
-| | Pktgen-DPDK | Network |
-| | Testpmd | Network |
-| | L2fwd | Network |
-| | Fio | Storage |
-| | Bonnie++ | Storage |
-+---------------+----------------------+------------------------------------+
======================================
@@ -216,7 +169,7 @@ categories can be used to group test suites.
+----------------+-------------------------------------------------------------+
| Smoke | Set of smoke test cases/suites to validate the release |
+----------------+-------------------------------------------------------------+
-| Features | Test cases that validate a specific feature on top of OPNFV.|
+| Features | Test cases that validate a specific feature on top of Anuket.|
| | Those come from Feature projects and need a bit of support |
| | for integration |
+----------------+-------------------------------------------------------------+
@@ -279,85 +232,19 @@ impairments to transmission.
These kinds of "load" cause "disruption" that can easily be found in
system logs. The purpose is to raise such "load" in order to evaluate whether
the SUT can provide an acceptable level of service, or level of confidence, under such
-circumstances. In Danube and Euphrates, we only considered the stress test with
-excess load over OPNFV Platform.
-
-In Danube, Bottlenecks and Yardstick project jointly implemented 2 stress tests
-(concurrently create/destroy VM pairs and do ping, system throughput limit)
-while Bottlenecks acts as the load manager calling yardstick to execute each
-test iteration. These tests are designed to test for breaking points and provide
-level of confidence of the system to users. Summary of the test cases are listed
-in the following addresses:
-
- * https://wiki.opnfv.org/display/bottlenecks/Stress+Testing+over+OPNFV+Platform
- * https://wiki.opnfv.org/download/attachments/2926539/Testing%20over%20Long%20Duration%20POD.pptx?version=2&modificationDate=1502943821000&api=v2
-
-**Stress test cases** for OPNFV Euphrates (OS Ocata) release can be seen as
-extension/enhancement of those in D release. These tests are located in
-Bottlenecks/Yardstick repo (Bottlenecks as load manager while Yardstick execute
-each test iteration):
-
- * VNF scale out/up tests (also plan to measure storage usage simultaneously): https://wiki.opnfv.org/pages/viewpage.action?pageId=12390101
- * Life-cycle event with throughputs (measure NFVI to support concurrent
- network usage from different VM pairs):
- https://wiki.opnfv.org/display/DEV/Intern+Project%3A+Baseline+Stress+Test+Case+for+Bottlenecks+E+Release
-
-In OPNFV E release, we also plan to do **long duration testing** over OS Ocata.
-A separate CI pipe testing OPNFV XCI (OSA) is proposed to accomplish the job.
-We have applied specific pod for the testing.
-Proposals and details are listed below:
-
-* https://wiki.opnfv.org/display/testing/Euphrates+Testing+needs
-* https://wiki.opnfv.org/download/attachments/2926539/testing%20evolution%20v1_4.pptx?version=1&modificationDate=1503937629000&api=v2
-* https://wiki.opnfv.org/download/attachments/2926539/Testing%20over%20Long%20Duration%20POD.pptx?version=2&modificationDate=1502943821000&api=v2
-
-The long duration testing is supposed to be started when OPNFV E release is
-published.
-A simple monitoring module for these tests is also planned to be added:
-https://wiki.opnfv.org/display/DEV/Intern+Project%3A+Monitoring+Stress+Testing+for+Bottlenecks+E+Release
+circumstances.
=======
How TOs
=======
-Where can I find information on the different test projects?
-============================================================
-On http://docs.opnfv.org! A section is dedicated to the testing projects. You
-will find the overview of the ecosystem and the links to the project documents.
-
-Another source is the testing wiki on https://wiki.opnfv.org/display/testing
-
-You may also contact the testing group on the IRC channel #opnfv-testperf or by
-mail at test-wg AT lists.opnfv.org (testing group) or opnfv-tech-discuss AT
-lists.opnfv.org (generic technical discussions).
-
-
How can I contribute to a test project?
=======================================
As with any project, the best approach is to contact the project directly. The
project members and their email addresses can be found under
https://git.opnfv.org/<project>/tree/INFO
-You may also send a mail to the testing mailing list or use the IRC channel
-#opnfv-testperf
-
-
-Where can I find hardware resources?
-====================================
-You should discuss this topic with the project you are working with. If you need
-access to an OPNFV community POD, it is possible to contact the infrastructure
-group. Depending on your needs (scenario/installer/tooling), it should be
-possible to find free time slots on one OPNFV community POD from the Pharos
-federation. Create a JIRA ticket to describe your needs on
-https://jira.opnfv.org/projects/INFRA.
-You must already be an OPNFV contributor. See
-https://wiki.opnfv.org/display/DEV/Developer+Getting+Started.
-
-Please note that lots of projects have their own "how to contribute" or
-"get started" page on the OPNFV wiki.
-
-
How do I integrate my tests in CI?
==================================
It shall be discussed directly with the project you are working with. It is
@@ -403,8 +290,6 @@ The architecture and associated API is described in previous chapter.
If you want to push your results from CI, you just have to call the API
at the end of your script.
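A minimal sketch of such a call is shown below. The endpoint URL and the field names of the JSON body are assumptions for illustration; check the Test API documentation referenced in the next section for the exact schema before using this in CI:

```python
import json
import urllib.request

# Assumed endpoint; the real URL is given by the Test API documentation.
API_URL = "http://testresults.opnfv.org/test/api/v1/results"

def build_result_payload(project, case, pod, version, criteria):
    """Assemble the JSON body describing one test run (field names assumed)."""
    return {
        "project_name": project,
        "case_name": case,
        "pod_name": pod,
        "version": version,
        "criteria": criteria,  # e.g. "PASS" or "FAIL"
    }

def push_result(payload, api_url=API_URL):
    """POST the result to the Test API at the end of a CI script."""
    request = urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(request)
```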
-You can also reuse a python function defined in functest_utils.py `[DEV2]`_
-
Where can I find the documentation on the test API?
===================================================
@@ -412,8 +297,6 @@ Where can I find the documentation on the test API?
The Test API is now documented in this document (see sections above).
You may also find autogenerated documentation in
http://artifacts.opnfv.org/releng/docs/testapi.html
-A web protal is also under construction for certification at
-http://testresults.opnfv.org/test/#/
I have tests, to which category should I declare them?
======================================================
@@ -448,20 +331,9 @@ http://artifacts.opnfv.org/<project name>
References
==========
-`[DEV1]`_: OPNFV Testing Ecosystem
-
-`[DEV2]`_: Python code sample to push results into the Database
-
-`[DEV3]`_: Testing group wiki page
-
`[DEV4]`_: Conversation with the testing community, OPNFV Beijing Summit
`[DEV5]`_: GS NFV 003
-.. _`[DEV1]`: http://docs.opnfv.org/en/latest/testing/ecosystem/index.html
-.. _`[DEV2]`: https://git.opnfv.org/functest/tree/functest/utils/functest_utils.py#176
-.. _`[DEV3]`: https://wiki.opnfv.org/display/meetings/Test+Working+Group+Weekly+Meeting
.. _`[DEV4]`: https://www.youtube.com/watch?v=f9VAUdEqHoA
-.. _`[DEV5]`: http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.01.01_60/gs_NFV003v010101p.pdf
-
-IRC support chan: #opnfv-testperf
+.. _`[DEV5]`: http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.01.01_60/gs_NFV003v010101p.pdf
\ No newline at end of file
diff --git a/docs/testing/ecosystem/energy-monitoring.rst b/docs/testing/ecosystem/energy-monitoring.rst
deleted file mode 100644
index b47f044bf..000000000
--- a/docs/testing/ecosystem/energy-monitoring.rst
+++ /dev/null
@@ -1,260 +0,0 @@
-.. _energy-monitoring:
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. SPDX-License-Identifier: CC-BY-4.0
-.. (c) Open Platform for NFV Project, Inc. and its contributors
-
-Power Consumption Monitoring Framework
-======================================
-
-Overview
---------
-Power consumption is a key driver for NFV.
-As an end user is interested to know which application is good or bad regarding
-power consumption and explains why he/she has to plug his/her smartphone every
-day, we would be interested to know which VNF is power consuming.
-
-Power consumption is hard to evaluate empirically. It is however possible to
-collect information and leverage Pharos federation to try to detect some
-profiles/footprints.
-In fact thanks to CI, we know that we are running a known/deterministic list of
-cases. The idea is to correlate this knowledge with the power consumption to try
-at the end to find statistical biais.
-
-
-High Level Architecture
------------------------
-
-The energy recorder high level architecture may be described as follows:
-
-.. figure:: ../../images/energyrecorder.png
- :align: center
- :alt: Energy recorder high level architecture
-
-The energy monitoring system in based on 3 software components:
-
- * Power info collector: poll server to collect instantaneous power consumption information
- * Energy recording API + influxdb: On one leg receive servers consumption and
- on the other, scenarios notfication. It then able to establish te correlation
- between consumption and scenario and stores it into a time-series database (influxdb)
- * Python SDK: A Python SDK using decorator to send notification to Energy
- recording API from testcases scenarios
-
-Power Info Collector
---------------------
-It collects instantaneous power consumption information and send it to Event
-API in charge of data storing.
-The collector use different connector to read the power consumption on remote
-servers:
-
- * IPMI: this is the basic method and is manufacturer dependent. Depending on manufacturer, refreshing delay may vary (generally for 10 to 30 sec.)
- * RedFish: redfish is an industry RESTFUL API for hardware managment. Unfortunatly it is not yet supported by many suppliers.
- * ILO: HP RESTFULL API: This connector support as well 2.1 as 2.4 version of HP-ILO
-
-IPMI is supported by at least:
-
- * HP
- * IBM
- * Dell
- * Nokia
- * Advantech
- * Lenovo
- * Huawei
-
-Redfish API has been successfully tested on:
-
- * HP
- * Dell
- * Huawei (E9000 class servers used in OPNFV Community Labs are IPMI 2.0
- compliant and use Redfish login Interface through Browsers supporting JRE1.7/1.8)
-
-Several test campaigns done with physical Wattmeter showed that IPMI results
-were notvery accurate but RedFish were. So if Redfish is available, it is
-highly recommended to use it.
-
-Installation
-^^^^^^^^^^^^
-
-To run the server power consumption collector agent, you need to deploy a
-docker container locally on your infrastructure.
-
-This container requires:
-
- * Connectivy on the LAN where server administration services (ILO, eDrac, IPMI,...) are configured and IP access to the POD's servers
- * Outgoing HTTP access to the Event API (internet)
-
-Build the image by typing::
-
- curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/docker/server-collector.dockerfile|docker build -t energyrecorder/collector -
-
-Create local folder on your host for logs and config files::
-
- mkdir -p /etc/energyrecorder
- mkdir -p /var/log/energyrecorder
-
-In /etc/energyrecorder create a configuration for logging in a file named
-collector-logging.conf::
-
- curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/server-collector/conf/collector-logging.conf.sample > /etc/energyrecorder/collector-logging.conf
-
-Check configuration for this file (folders, log levels.....)
-In /etc/energyrecorder create a configuration for the collector in a file named
-collector-settings.yaml::
-
- curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/server-collector/conf/collector-settings.yaml.sample > /etc/energyrecorder/collector-settings.yaml
-
-Define the "PODS" section and their "servers" section according to the
-environment to monitor.
-Note: The "environment" key should correspond to the pod name, as defined in
-the "NODE_NAME" environment variable by CI when running.
-
-**IMPORTANT NOTE**: To apply a new configuration, you need to kill the running
-container an start a new one (see below)
-
-Run Collector
-^^^^^^^^^^^^^
-
-To run the container, you have to map folder located on the host to folders in
-the container (config, logs)::
-
- docker run -d --name energy-collector --restart=always -v /etc/energyrecorder:/usr/local/energyrecorder/server-collector/conf -v /var/log/energyrecorder:/var/log/energyrecorder energyrecorder/collector
-
-
-Energy Recording API
---------------------
-An event API to insert contextual information when monitoring energy (e.g.
-start Functest, start Tempest, destroy VM, ..)
-It is associated with an influxDB to store the power consumption measures
-It is hosted on a shared environment with the folling access points:
-
-+------------------------------------+----------------------------------------+
-| Component | Connectivity |
-+====================================+========================================+
-| Energy recording API documentation | http://energy.opnfv.fr/resources/doc/ |
-+------------------------------------+----------------------------------------+
-| influxDB (data) | http://energy.opnfv.fr:8086 |
-+------------------------------------+----------------------------------------+
-
-In you need, you can also host your own version of the Energy recording API
-(in such case, the Python SDK may requires a settings update)
-If you plan to use the default shared API, following steps are not required.
-
-Image creation
-^^^^^^^^^^^^^^
-First, you need to buid an image::
-
- curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/docker/recording-api.dockerfile|docker build -t energyrecorder/api -
-
-Setup
-^^^^^
-Create local folder on your host for logs and config files::
-
- mkdir -p /etc/energyrecorder
- mkdir -p /var/log/energyrecorder
- mkdir -p /var/lib/influxdb
-
-In /etc/energyrecorder create a configuration for logging in a file named
-webapp-logging.conf::
-
- curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/recording-api/conf/webapp-logging.conf.sample > /etc/energyrecorder/webapp-logging.conf
-
-Check configuration for this file (folders, log levels.....)
-
-In /etc/energyrecorder create a configuration for the collector in a file
-named webapp-settings.yaml::
-
- curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/recording-api/conf/webapp-settings.yaml.sample > /etc/energyrecorder/webapp-settings.yaml
-
-Normaly included configuration is ready to use except username/passwer for
-influx (see run-container.sh bellow). Use here the admin user.
-
-**IMPORTANT NOTE**: To apply a new configuration, you need to kill the running
-container an start a new one (see bellow)
-
-Run API
-^^^^^^^
-To run the container, you have to map folder located on the host to folders in
-the container (config, logs)::
-
- docker run -d --name energyrecorder-api -p 8086:8086 -p 8888:8888 -v /etc/energyrecorder:/usr/local/energyrecorder/web.py/conf -v /var/log/energyrecorder/:/var/log/energyrecorder -v /var/lib/influxdb:/var/lib/influxdb energyrecorder/webapp admin-influx-user-name admin-password readonly-influx-user-name user-password
-
-with
-
-+---------------------------+--------------------------------------------+
-| Parameter name | Description |
-+===========================+============================================+
-| admin-influx-user-name | Influx user with admin grants to create |
-+---------------------------+--------------------------------------------+
-| admin-password | Influx password to set to admin user |
-+---------------------------+--------------------------------------------+
-| readonly-influx-user-name | Influx user with readonly grants to create |
-+---------------------------+--------------------------------------------+
-| user-password | Influx password to set to readonly user |
-+---------------------------+--------------------------------------------+
-
-**NOTE**: Local folder /var/lib/influxdb is the location web influx data are
-stored. You may used anything else at your convience. Just remember to define
-this mapping properly when running the container.
-
-Power consumption Python SDK
-----------------------------
-a Python SDK - almost not intrusive, based on python decorator to trigger call
-to the event API.
-
-It is currently hosted in Functest repo but if other projects adopt it, a
-dedicated project could be created and/or it could be hosted in Releng.
-
-How to use the SDK
-^^^^^^^^^^^^^^^^^^
-
-import the energy library::
-
- import functest.energy.energy as energy
-
-Notify that you want power recording in your testcase::
-
- @energy.enable_recording
- def run(self):
- self.do_some_stuff1()
- self.do_some_stuff2()
-
-If you want to register additional steps during the scenarios you can to it in
-2 different ways.
-
-Notify step on method definition::
-
- @energy.set_step("step1")
- def do_some_stuff1(self):
- ...
- @energy.set_step("step2")
- def do_some_stuff2(self):
-
-Notify directly from code::
-
- @energy.enable_recording
- def run(self):
- Energy.set_step("step1")
- self.do_some_stuff1()
- ...
- Energy.set_step("step2")
- self.do_some_stuff2()
-
-SDK Setting
-^^^^^^^^^^^
-Settings delivered in the project git are ready to use and assume that you will
-use the sahre energy recording API.
-If you want to use an other instance, you have to update the key
-"energy_recorder.api_url" in <FUNCTEST>/functest/ci/config_functest.yaml" by
-setting the proper hostname/IP
-
-Results
--------
-Here is an example of result comming from LF POD2. This sequence represents
-several CI runs in a raw. (0 power corresponds to hard reboot of the servers)
-
-You may connect http://energy.opnfv.fr:3000 for more results (ask for
-credentials to infra team).
-
-.. figure:: ../../images/energy_LF2.png
- :align: center
- :alt: Energy monitoring of LF POD2
diff --git a/docs/testing/ecosystem/overview.rst b/docs/testing/ecosystem/overview.rst
index 309d6268c..b67cac24d 100644
--- a/docs/testing/ecosystem/overview.rst
+++ b/docs/testing/ecosystem/overview.rst
@@ -4,132 +4,67 @@
.. SPDX-License-Identifier: CC-BY-4.0
-======================
-OPNFV Testing Overview
-======================
+=======================
+Anuket Testing Overview
+=======================
Introduction
============
-Testing is one of the key activities in OPNFV and includes unit, feature,
+Testing is one of the key activities in Anuket and includes unit, feature,
component, system level testing for development, automated deployment,
performance characterization and stress testing.
Test projects are dedicated to provide frameworks, tooling and test-cases categorized as
functional, performance or compliance testing. Test projects fulfill different roles such as
verifying VIM functionality, benchmarking components and platforms or analysis of measured
-KPIs for OPNFV release scenarios.
+KPIs for Anuket release scenarios.
Feature projects also provide their own test suites that either run independently or within a
test project.
-This document details the OPNFV testing ecosystem, describes common test components used
-by individual OPNFV projects and provides links to project specific documentation.
+This document details the Anuket testing ecosystem, describes common test components used
+by individual Anuket projects and provides links to project specific documentation.
-The OPNFV Testing Ecosystem
-===========================
+The Anuket Testing Ecosystem
+============================
-The OPNFV testing projects are represented in the following diagram:
+The Anuket testing projects are represented in the following diagram:
.. figure:: ../../images/OPNFV_testing_working_group.png
:align: center
- :alt: Overview of OPNFV Testing projects
+ :alt: Overview of Anuket Testing projects
The major testing projects are described in the table below:
+----------------+---------------------------------------------------------+
| Project | Description |
+================+=========================================================+
-| Bottlenecks | This project aims to find system bottlenecks by testing |
-| | and verifying OPNFV infrastructure in a staging |
-| | environment before committing it to a production |
-| | environment. Instead of debugging a deployment in |
-| | production environment, an automatic method for |
-| | executing benchmarks which plans to validate the |
-| | deployment during staging is adopted. This project |
-| | forms a staging framework to find bottlenecks and to do |
-| | analysis of the OPNFV infrastructure. |
-+----------------+---------------------------------------------------------+
-| CPerf | SDN Controller benchmarks and performance testing, |
-| | applicable to controllers in general. Collaboration of |
-| | upstream controller testing experts, external test tool |
-| | developers and the standards community. Primarily |
-| | contribute to upstream/external tooling, then add jobs |
-| | to run those tools on OPNFV's infrastructure. |
-+----------------+---------------------------------------------------------+
-| Dovetail | This project intends to define and provide a set of |
-| | OPNFV related validation criteria/tests that will |
-| | provide input for the OPNFV Complaince Verification |
-| | Program. The Dovetail project is executed with the |
-| | guidance and oversight of the Complaince and |
-| | Certification (C&C) committee and work to secure the |
-| | goals of the C&C committee for each release. The |
-| | project intends to incrementally define qualification |
-| | criteria that establish the foundations of how one is |
-| | able to measure the ability to utilize the OPNFV |
-| | platform, how the platform itself should behave, and |
-| | how applications may be deployed on the platform. |
-+----------------+---------------------------------------------------------+
| Functest | This project deals with the functional testing of the |
| | VIM and NFVI. It leverages several upstream test suites |
| | (OpenStack, ODL, ONOS, etc.) and can be used by feature |
| | project to launch feature test suites in CI/CD. |
| | The project is used for scenario validation. |
+----------------+---------------------------------------------------------+
-| NFVbench | NFVbench is a compact and self contained data plane |
-| | performance measurement tool for OpensStack based NFVi |
-| | platforms. It is agnostic of the NFVi distribution, |
-| | Neutron networking implementation and hardware. |
-| | It runs on any Linux server with a DPDK compliant |
-| | NIC connected to the NFVi platform data plane and |
-| | bundles a highly efficient software traffic generator. |
-| | Provides a fully automated measurement of most common |
-| | packet paths at any level of scale and load using |
-| | RFC-2544. Available as a Docker container with simple |
-| | command line and REST interfaces. |
-| | Easy to use as it takes care of most of the guesswork |
-| | generally associated to data plane benchmarking. |
-| | Can run in any lab or in production environments. |
-+----------------+---------------------------------------------------------+
-| QTIP | QTIP as the project for "Platform Performance |
-| | Benchmarking" in OPNFV aims to provide user a simple |
-| | indicator for performance, supported by comprehensive |
-| | testing data and transparent calculation formula. |
-| | It provides a platform with common services for |
-| | performance benchmarking which helps users to build |
-| | indicators by themselves with ease. |
-+----------------+---------------------------------------------------------+
-| StorPerf | The purpose of this project is to provide a tool to |
-| | measure block and object storage performance in an NFVI.|
-| | When complemented with a characterization of typical VF |
-| | storage performance requirements, it can provide |
-| | pass/fail thresholds for test, staging, and production |
-| | NFVI environments. |
-+----------------+---------------------------------------------------------+
-| VSPERF | VSPERF is an OPNFV project that provides an automated |
-| | test-framework and comprehensive test suite based on |
-| | Industry Test Specifications for measuring NFVI |
-| | data-plane performance. The data-path includes switching|
-| | technologies with physical and virtual network |
-| | interfaces. The VSPERF architecture is switch and |
-| | traffic generator agnostic and test cases can be easily |
-| | customized. Software versions and configurations |
-| | including the vSwitch (OVS or VPP) as well as the |
-| | network topology are controlled by VSPERF (independent |
-| | of OpenStack). VSPERF is used as a development tool for |
-| | optimizing switching technologies, qualification of |
-| | packet processing components and for pre-deployment |
-| | evaluation of the NFV platform data-path. |
-+----------------+---------------------------------------------------------+
-| Yardstick | The goal of the Project is to verify the infrastructure |
-| | compliance when running VNF applications. NFV Use Cases |
-| | described in ETSI GS NFV 001 show a large variety of |
-| | applications, each defining specific requirements and |
-| | complex configuration on the underlying infrastructure |
-| | and test tools.The Yardstick concept decomposes typical |
-| | VNF work-load performance metrics into a number of |
-| | characteristics/performance vectors, which each of them |
-| | can be represented by distinct test-cases. |
+| ViNePerf | ViNePerf provides an automated test-framework and |
+| | comprehensive test suite based on industry standards for|
+| | measuring data-plane performance in different cloud |
+| | environments. The data plane in a cloud includes |
+| | different switching technologies with physical and |
+| | virtual network interfaces, and carries traffic to and |
+| | from workloads running as virtual machines and |
+| | containers. The ViNePerf architecture is agnostic of the|
+| | cloud type, switching technology and traffic generator. |
+| | ViNePerf allows users to customize test cases, network |
+| | topology, workload deployment, hardware configuration |
+| | and the versions of software components such as the |
+| | vSwitch, VNF, CNF, CNI, etc. ViNePerf can be used both |
+| | pre-deployment and post-deployment of the cloud. Though |
+| | the ViNePerf architecture is designed to evaluate the |
+| | data plane of clouds in lab environments, it can also be|
+| | used in production clouds. ViNePerf methods follow |
+| | standards developed by the IETF and ETSI NFV, and |
+| | contribute to the development of new standards. |
+----------------+---------------------------------------------------------+
@@ -140,8 +75,8 @@ Testing Working Group Resources
Test Results Collection Framework
=================================
-Any test project running in the global OPNFV lab infrastructure and is
-integrated with OPNFV CI can push test results to the community Test Database
+Any test project that runs in the global Anuket lab infrastructure and is
+integrated with Anuket CI can push test results to the community Test Database
using a common Test API. This database can be used to track the evolution of
testing and analyse test runs to compare results across installers, scenarios
and between technically and geographically diverse hardware environments.
@@ -195,12 +130,12 @@ The following collections are declared in this database:
* projects: the list of projects providing test cases
* test cases: the test cases related to a given project
* results: the results of the test cases
-* scenarios: the OPNFV scenarios tested in CI
+* scenarios: the Anuket scenarios tested in CI
This database can be used by any project through the Test API.
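The collections above can be pictured as a small document store in which each result references a pod, a project, a test case and a scenario. The in-memory stand-in below is purely illustrative (the real schema is only reachable through the Test API), but it shows the shape of a typical query:

```python
# Illustrative stand-in for the five collections; names and fields
# are assumptions, not the actual Test Database schema.
pods = [{"name": "intel-pod18"}]
projects = [{"name": "functest"}]
test_cases = [{"project_name": "functest", "name": "vping_ssh"}]
scenarios = [{"name": "os-nosdn-nofeature-ha"}]
results = [
    {"project_name": "functest", "case_name": "vping_ssh",
     "pod_name": "intel-pod18", "scenario": "os-nosdn-nofeature-ha",
     "criteria": "PASS"},
    {"project_name": "functest", "case_name": "vping_ssh",
     "pod_name": "intel-pod18", "scenario": "os-nosdn-nofeature-ha",
     "criteria": "FAIL"},
]

def results_for_case(project, case):
    """Filter the results collection the way a Test API query would."""
    return [r for r in results
            if r["project_name"] == project and r["case_name"] == case]

runs = results_for_case("functest", "vping_ssh")
passed = sum(1 for r in runs if r["criteria"] == "PASS")
```

A comparison across installers or scenarios is then just a further filter on the same result documents.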
Please note that projects may also use additional databases. The Test
Database is mainly used to collect CI test results and generate scenario
-trust indicators. The Test Database is also cloned for OPNFV Plugfests in
+trust indicators. The Test Database is also cloned for Anuket Plugfests in
order to provide a private datastore only accessible to Plugfest participants.
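One use of the collected CI results is the scenario trust indicator mentioned above. This guide does not define the exact formula, so the sketch below assumes a simple exponentially weighted pass rate, which is one plausible way to let recent runs dominate the score:

```python
def trust_indicator(criteria_history, weight=0.3):
    """Exponentially weighted pass rate over a sequence of CI runs.

    criteria_history: iterable of "PASS"/"FAIL" strings, oldest first.
    The formula is illustrative, not the one used by the Test Database.
    """
    score = 0.0
    for criteria in criteria_history:
        outcome = 1.0 if criteria == "PASS" else 0.0
        score = (1 - weight) * score + weight * outcome
    return score

# A scenario that recently started passing scores higher than one that
# recently started failing, even though both passed two runs out of four.
improving = trust_indicator(["FAIL", "FAIL", "PASS", "PASS"])
degrading = trust_indicator(["PASS", "PASS", "FAIL", "FAIL"])
```

Whatever the real weighting, the point is the same: the indicator condenses a scenario's CI history into a single number that can be tracked release over release.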
@@ -265,7 +200,7 @@ The reporting page for the test projects is http://testresults.opnfv.org/reporti
:align: center
:alt: Testing group reporting page
-This page provides reporting per OPNFV release and per testing project.
+This page provides reporting per Anuket release and per testing project.
.. figure:: ../../images/reportingMaster.png
:align: center
@@ -310,63 +245,13 @@ contains raw results.
The dashboard can be used in addition to the reporting page (high level view) to allow
the creation of specific graphs according to what the test owner wants to show.
-In Brahmaputra, a basic dashboard was created in Functest.
-In Colorado, Yardstick used Grafana (time based graphs) and ELK (complex
-graphs).
-Since Danube, the OPNFV testing community decided to adopt the ELK framework and to
-use Bitergia for creating highly flexible dashboards `[TST5]`_.
-
-.. figure:: ../../images/DashboardBitergia.png
- :align: center
- :alt: Testing group testcase catalog
-
-
-.. include:: ./energy-monitoring.rst
-
-
-OPNFV Test Group Information
-============================
-
-For more information or to participate in the OPNFV test community please see the
-following:
-
-wiki: https://wiki.opnfv.org/testing
-
-mailing list: test-wg@lists.opnfv.org
-
-IRC channel: #opnfv-testperf
-
-weekly meeting (https://wiki.opnfv.org/display/meetings/TestPerf):
- * Usual time: Every Thursday 15:00-16:00 UTC / 7:00-8:00 PST
=======================
Reference Documentation
=======================
-+----------------+---------------------------------------------------------+
-| Project | Documentation links |
-+================+=========================================================+
-| Bottlenecks | https://wiki.opnfv.org/display/bottlenecks/Bottlenecks |
-+----------------+---------------------------------------------------------+
-| CPerf | https://wiki.opnfv.org/display/cperf |
-+----------------+---------------------------------------------------------+
-| Dovetail | https://wiki.opnfv.org/display/dovetail |
-+----------------+---------------------------------------------------------+
-| Functest | https://wiki.opnfv.org/display/functest/ |
-+----------------+---------------------------------------------------------+
-| NFVbench | https://wiki.opnfv.org/display/nfvbench/ |
-+----------------+---------------------------------------------------------+
-| QTIP | https://wiki.opnfv.org/display/qtip |
-+----------------+---------------------------------------------------------+
-| StorPerf | https://wiki.opnfv.org/display/storperf/Storperf |
-+----------------+---------------------------------------------------------+
-| VSPERF | https://wiki.opnfv.org/display/vsperf |
-+----------------+---------------------------------------------------------+
-| Yardstick | https://wiki.opnfv.org/display/yardstick/Yardstick |
-+----------------+---------------------------------------------------------+
-
-`[TST1]`_: OPNFV web site
+`[TST1]`_: Anuket web site
`[TST2]`_: TestAPI code repository link in releng-testresults
@@ -374,10 +259,7 @@ Reference Documentation
`[TST4]`_: Testcase catalog
-`[TST5]`_: Testing group dashboard
-
-.. _`[TST1]`: http://www.opnfv.org
+.. _`[TST1]`: http://www.anuket.org
.. _`[TST2]`: https://git.opnfv.org/releng-testresults
.. _`[TST3]`: http://artifacts.opnfv.org/releng/docs/testapi.html
.. _`[TST4]`: http://testresults.opnfv.org/testing/index.html#!/select/visual
-.. _`[TST5]`: https://opnfv.biterg.io:443/goto/283dba93ca18e95964f852c63af1d1ba