Diffstat (limited to 'docs/testing/user/userguide')
-rw-r--r--docs/testing/user/userguide/api_testing_guide.rst373
-rw-r--r--docs/testing/user/userguide/cli_reference.rst71
-rw-r--r--docs/testing/user/userguide/images/tocsa_vnf_test_environment.pngbin0 -> 101795 bytes
-rw-r--r--docs/testing/user/userguide/images/tosca_vnf_test_flow.pngbin0 -> 40614 bytes
-rw-r--r--docs/testing/user/userguide/index.rst2
-rw-r--r--docs/testing/user/userguide/testing_guide.rst328
-rw-r--r--docs/testing/user/userguide/vnf_test_guide.rst714
7 files changed, 1301 insertions, 187 deletions
diff --git a/docs/testing/user/userguide/api_testing_guide.rst b/docs/testing/user/userguide/api_testing_guide.rst
new file mode 100644
index 00000000..119beff7
--- /dev/null
+++ b/docs/testing/user/userguide/api_testing_guide.rst
@@ -0,0 +1,373 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+===============================
+Running Dovetail by RESTful API
+===============================
+
+Overview
+--------
+
+The Dovetail framework provides RESTful APIs for end users to run all OVP test cases.
+It also provides a Swagger UI for users to discover all APIs and try them out.
+
+
+Definitions and abbreviations
+-----------------------------
+
+- REST - Representational State Transfer
+- API - Application Programming Interface
+- OVP - OPNFV Verification Program
+- UI - User Interface
+
+
+Environment Preparation
+-----------------------
+
+
+Install Docker
+^^^^^^^^^^^^^^
+
+The main prerequisite software for Dovetail is Docker. Please refer to the official
+Docker installation guide that is relevant to your Test Host's operating system.
+
+
+Configuring the Test Host Environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For convenience and as a convention, we will create a home directory for storing
+all Dovetail related config items and results files:
+
+.. code-block:: bash
+
+ $ mkdir -p ${HOME}/dovetail
+ $ export DOVETAIL_HOME=${HOME}/dovetail
+
+
+Installing Dovetail API
+-----------------------
+
+The Dovetail project maintains a Docker image that has both Dovetail API and
+Dovetail CLI preinstalled. This Docker image is tagged with versions.
+Before pulling the Dovetail image, check OPNFV's OVP web page first to
+determine the right tag for OVP testing.
+
+
+Downloading Dovetail Docker Image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The first version of Dovetail API is ovp-3.0.0.
+
+.. code-block:: bash
+
+ $ sudo docker pull opnfv/dovetail:ovp-3.0.0
+ ovp-3.0.0: Pulling from opnfv/dovetail
+ 6abc03819f3e: Pull complete
+ 05731e63f211: Pull complete
+ 0bd67c50d6be: Pull complete
+ 3f737f5d00b2: Pull complete
+ c93fd0792ebd: Pull complete
+ 77d9a9603ec6: Pull complete
+ 9463cdd9c628: Pull complete
+ Digest: sha256:45e2ffdbe217a4e6723536afb5b6a3785d318deff535da275f34cf8393af458d
+ Status: Downloaded newer image for opnfv/dovetail:ovp-3.0.0
+
+
+Deploying Dovetail API
+^^^^^^^^^^^^^^^^^^^^^^
+
+The Dovetail API can be deployed by running a Dovetail container with the Docker
+image downloaded above.
+
+.. code-block:: bash
+
+ $ docker run -itd -p <swagger_port>:80 -p <api_port>:5000 --privileged=true \
+ -e SWAGGER_HOST=<host_ip>:<api_port> -e DOVETAIL_HOME=/home/ovp \
+ -v /home/ovp:/home/ovp -v /var/run/docker.sock:/var/run/docker.sock \
+ opnfv/dovetail:<version>
+
+
+Inside the container, two ports are used: port 80 for the Swagger UI and port 5000
+for the API. In order to access these two services from outside the container,
+they need to be mapped to host ports. Any available ports on the host can be used.
+
+The environment variable SWAGGER_HOST is optional. If you will access the Swagger UI
+webpage from the same host that deploys this container, there is no need to set
+SWAGGER_HOST. Otherwise, if you will access the Swagger UI webpage from other
+machines, SWAGGER_HOST needs to be set.
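+
+As an illustration only (the host ports 8080 and 5001 and the host IP below are
+assumptions; use any free ports and the real IP of your Test Host), a concrete
+invocation of the command above could look like:
+
+.. code-block:: bash
+
+    $ docker run -itd -p 8080:80 -p 5001:5000 --privileged=true \
+      -e SWAGGER_HOST=192.168.117.10:5001 -e DOVETAIL_HOME=/home/ovp \
+      -v /home/ovp:/home/ovp -v /var/run/docker.sock:/var/run/docker.sock \
+      opnfv/dovetail:ovp-3.0.0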
+
+
+Using Dovetail API
+------------------
+
+This section describes where to find all APIs and how to use them.
+
+
+Swagger UI Webpage
+^^^^^^^^^^^^^^^^^^
+
+After deploying the Dovetail container, the Swagger UI webpage can be accessed with
+any browser. The URL is ``http://localhost:<swagger_port>/dovetail-api/index.html``
+when accessing it from the same host that deploys the container. Otherwise, the URL
+is ``http://<host_ip>:<swagger_port>/dovetail-api/index.html``.
+
+
+Calling APIs
+^^^^^^^^^^^^
+
+Dovetail provides 5 APIs in total:
+
+ * Get all test suites
+
+ * Get all test cases
+
+ * Run test cases
+
+ * Run test cases with execution ID
+
+ * Get status of test cases
+
+Below is a brief guide on how to call these APIs. For more detailed information,
+please refer to the Swagger UI page.
+
+
+Getting All Test Suites
+=======================
+
+ * This is a **GET** function with no parameter to get all test suites defined
+ in the Dovetail container.
+
+ * The request URL is ``http://<host_ip>:<api_port>/api/v1/scenario/nfvi/testsuites``.
+
+ * The response body is structured as:
+
+ .. code-block:: bash
+
+ {
+ "testsuites": {
+ "debug": {
+ "name": "debug",
+ "testcases_list": {
+ "optional": [
+ "functest.vping.userdata"
+ ]
+ }
+ },
+ "healthcheck": {
+ "name": "healthcheck",
+ "testcases_list": {
+ "optional": [
+ "functest.healthcheck.connection_check"
+ ]
+ }
+ }
+ }
+ }
+
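+ * As an illustration, assuming the API port is mapped to 5000 on the Test Host
+ (an assumption for this example), the endpoint can be exercised with curl:
+
+ .. code-block:: bash
+
+     $ curl -X GET http://localhost:5000/api/v1/scenario/nfvi/testsuites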
+
+Getting All Test Cases
+======================
+
+ * This is a **GET** function with no parameter to get all test cases integrated
+ in the Dovetail container.
+
+ * The request URL is ``http://<host_ip>:<api_port>/api/v1/scenario/nfvi/testcases``.
+
+ * The response body is structured as:
+
+ .. code-block:: bash
+
+ {
+ "testcases": [
+ {
+ "description": "This test case will verify the high availability of the user service provided by OpenStack (keystone) on control node.",
+ "scenario": "nfvi",
+ "subTestCase": null,
+ "testCaseName": "yardstick.ha.keystone"
+ },
+ {
+ "description": "testing for vping using userdata",
+ "scenario": "nfvi",
+ "subTestCase": null,
+ "testCaseName": "functest.vping.userdata"
+ },
+ {
+ "description": "tempest smoke test cases about volume",
+ "scenario": "nfvi",
+ "subTestCase": [
+ "tempest.api.volume.test_volumes_actions.VolumesActionsTest.test_attach_detach_volume_to_instance[compute,id-fff42874-7db5-4487-a8e1-ddda5fb5288d,smoke]",
+ "tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,id-557cd2c2-4eb8-4dce-98be-f86765ff311b,image,slow,volume]"
+ ],
+ "testCaseName": "functest.tempest.volume"
+ }
+ ]
+ }
+
+
+Running Test Cases
+==================
+
+ * This is a **POST** function with some parameters to run a subset of all test cases.
+
+ * The request URL is ``http://<host_ip>:<api_port>/api/v1/scenario/nfvi/execution``.
+
+ * The request body is structured as follows. The ``conf`` section is used to
+ provide all configuration items that are required to run test cases. They are
+ the same as the configuration files provided under ``$DOVETAIL_HOME/pre_config/``.
+ If you already have these files under this directory, the whole ``conf`` section
+ can be omitted. If you provide these configuration items within the request body,
+ then the corresponding files under ``$DOVETAIL_HOME/pre_config/`` will be ignored
+ by Dovetail. The ``testcase``, ``testsuite``, ``testarea`` and ``deploy_scenario``
+ keys correspond to ``--testcase``, ``--testsuite``, ``--testarea`` and ``--deploy-scenario``
+ defined by the Dovetail CLI. The ``options`` section supports setting all options
+ which have already been implemented by the Dovetail CLI, including ``--optional``,
+ ``--mandatory``, ``--no-clean``, ``--no-api-validation``, ``--offline``,
+ ``--report``, ``--stop`` and ``--debug``. Options listed in the ``options`` section
+ are set to ``True``; all others are set to ``False``.
+
+ .. code-block:: bash
+
+ {
+ "conf": {
+ "vm_images": "/home/ovp/images",
+ "pods": {
+ "nodes": [
+ {
+ "name": "node1",
+ "role": "Controller",
+ "ip": "192.168.117.222",
+ "user": "root",
+ "password": "root",
+ }
+ ],
+ "process_info": [
+ {
+ "testcase_name": "yardstick.ha.rabbitmq",
+ "attack_host": "node1",
+ "attack_process": "rabbitmq"
+ }
+ ]
+ },
+ "tempest_conf": {
+ "compute": {
+ "min_compute_nodes": "2",
+ "volume_device_name": "vdb",
+ "max_microversion": "2.65"
+ }
+ },
+ "hosts": {
+ "192.168.141.101": [
+ "volume.os.com",
+ "compute.os.com"
+ ]
+ },
+ "envs": {
+ "OS_USERNAME": "admin",
+ "OS_PASSWORD": "admin",
+ "OS_AUTH_URL": "https://192.168.117.222:5000/v3",
+ "EXTERNAL_NETWORK": "ext-net"
+ }
+ },
+ "testcase": [
+ "functest.vping.ssh",
+ "yardstick.ha.rabbitmq"
+ ],
+ "testsuite": "ovp.2019.12",
+ "testarea": [
+ "vping",
+ "ha"
+ ],
+ "deploy_scenario": "os-nosdn-ovs-ha",
+ "options": [
+ "debug",
+ "report"
+ ]
+ }
+
+
+ * The response body is structured as:
+
+ .. code-block:: bash
+
+ {
+ "result": [
+ {
+ "endTime": null,
+ "executionId": "a65e24c0-1803-11ea-84f4-0242ac110004",
+ "results": null,
+ "scenario": "nfvi",
+ "status": "IN_PROGRESS",
+ "testCaseName": "functest.vping.ssh",
+ "testSuiteName": "ovp.2019.12",
+ "timestart": null
+ }
+ ]
+ }
+
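+ * As a minimal illustration (assuming the configuration files already exist under
+ ``$DOVETAIL_HOME/pre_config/`` so that the ``conf`` section can be omitted, and
+ with the host and port placeholders replaced by real values), this API could be
+ called with curl as follows:
+
+ .. code-block:: bash
+
+     $ curl -X POST -H "Content-Type: application/json" \
+       -d '{"testcase": ["functest.vping.ssh"], "testsuite": "ovp.2019.12", "options": ["report"]}' \
+       http://<host_ip>:<api_port>/api/v1/scenario/nfvi/execution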
+
+Running Test Cases with Execution ID
+====================================
+
+ * This is a **POST** function with some parameters to run a subset of all
+ test cases and set the execution ID instead of using a random one.
+
+ * The request URL is ``http://<host_ip>:<api_port>/api/v1/scenario/nfvi/execution/{exec_id}``.
+
+ * It is almost the same as the `Running Test Cases`_ API above, except that the execution ID is specified by the caller.
+
+
+Getting Status of Test Cases
+============================
+
+ * This is a **POST** function to get the status of some test cases by using
+ the execution ID received in the response body of `Running Test Cases`_ or
+ `Running Test Cases with Execution ID`_ APIs.
+
+ * The request URL is ``http://<host_ip>:<api_port>/api/v1/scenario/nfvi/execution/status/{exec_id}``.
+
+ * The request body is structured as:
+
+ .. code-block:: bash
+
+ {
+ "testcase": [
+ "functest.vping.ssh"
+ ]
+ }
+
+ * The response body is structured as:
+
+ .. code-block:: bash
+
+ {
+ "result": [
+ {
+ "endTime": "2019-12-06 08:39:23",
+ "executionId": "a65e24c0-1803-11ea-84f4-0242ac110004",
+ "results": {
+ "criteria": "PASS",
+ "sub_testcase": [],
+ "timestart": "2019-12-06 08:38:40",
+ "timestop":"2019-12-06 08:39:23"
+ },
+ "scenario": "nfvi",
+ "status": "COMPLETED",
+ "testCaseName": "functest.vping.ssh",
+ "testSuiteName": "ovp.2019.12",
+ "timestart":"2019-12-06 08:38:40"
+ }
+ ]
+ }
+
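+ * For example, using the execution ID from the response above (the host and port
+ are placeholders to be replaced with real values), the status can be polled with curl:
+
+ .. code-block:: bash
+
+     $ curl -X POST -H "Content-Type: application/json" \
+       -d '{"testcase": ["functest.vping.ssh"]}' \
+       http://<host_ip>:<api_port>/api/v1/scenario/nfvi/execution/status/a65e24c0-1803-11ea-84f4-0242ac110004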
+
+
+
+Getting Test Results
+^^^^^^^^^^^^^^^^^^^^
+
+Each time you call the API for running test cases, Dovetail creates a directory named
+after the execution ID under ``$DOVETAIL_HOME`` to store results on the host.
+You can find all result files under ``$DOVETAIL_HOME/<executionId>/results``.
+If you run test cases with the ``report`` option, then there will be a tarball file
+under ``$DOVETAIL_HOME/<executionId>`` which can be uploaded to the OVP portal.
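+
+For example, with the execution ID from the earlier response (used here purely for
+illustration), the result files could be listed with:
+
+.. code-block:: bash
+
+    $ ls ${DOVETAIL_HOME}/a65e24c0-1803-11ea-84f4-0242ac110004/results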
diff --git a/docs/testing/user/userguide/cli_reference.rst b/docs/testing/user/userguide/cli_reference.rst
index 97eccffc..7dd5c8e4 100644
--- a/docs/testing/user/userguide/cli_reference.rst
+++ b/docs/testing/user/userguide/cli_reference.rst
@@ -135,7 +135,7 @@ Dovetail List Commands
.. code-block:: bash
- root@1f230e719e44:~/dovetail/dovetail# dovetail list ovp.2018.09
+ root@1f230e719e44:~/dovetail/dovetail# dovetail list ovp.2019.12
- mandatory
functest.vping.userdata
functest.vping.ssh
@@ -166,15 +166,11 @@ Dovetail List Commands
functest.tempest.vm_lifecycle
functest.tempest.network_scenario
functest.tempest.bgpvpn
- functest.bgpvpn.subnet_connectivity
- functest.bgpvpn.tenant_separation
- functest.bgpvpn.router_association
- functest.bgpvpn.router_association_floating_ip
+ functest.security.patrole_vxlan_dependent
yardstick.ha.neutron_l3_agent
yardstick.ha.controller_restart
functest.vnf.vims
functest.vnf.vepc
- functest.snaps.smoke
Dovetail Show Commands
----------------------
@@ -199,12 +195,15 @@ Dovetail Show Commands
validate:
type: functest
testcase: vping_ssh
+ image_name: opnfv/functest-healthcheck
report:
source_archive_files:
- functest.log
dest_archive_files:
- vping_logs/functest.vping.ssh.log
- check_results_file: 'functest_results.txt'
+ check_results_file:
+ - 'functest_results.txt'
+ portal_key_file: vping_logs/functest.vping.ssh.log
sub_testcase_list:
.. code-block:: bash
@@ -219,20 +218,20 @@ Dovetail Show Commands
testcase: tempest_custom
pre_condition:
- 'cp /home/opnfv/userconfig/pre_config/tempest_conf.yaml /usr/lib/python2.7/site-packages/functest/opnfv_tests/openstack/tempest/custom_tests/tempest_conf.yaml'
- - 'cp /home/opnfv/userconfig/pre_config/testcases.yaml /usr/lib/python2.7/site-packages/xtesting/ci/testcases.yaml'
- pre_copy:
- src_file: tempest_custom.txt
- dest_path: /usr/lib/python2.7/site-packages/functest/opnfv_tests/openstack/tempest/custom_tests/test_list.txt
+ - 'cp /home/opnfv/userconfig/tempest_custom_testcases.yaml /usr/lib/python2.7/site-packages/xtesting/ci/testcases.yaml'
+ - 'cp /home/opnfv/functest/results/tempest_custom.txt /usr/lib/python2.7/site-packages/functest/opnfv_tests/openstack/tempest/custom_tests/test_list.txt'
report:
source_archive_files:
- functest.log
- - tempest_custom/tempest.log
+ - tempest_custom/rally.log
- tempest_custom/tempest-report.html
dest_archive_files:
- tempest_logs/functest.tempest.image.functest.log
- tempest_logs/functest.tempest.image.log
- tempest_logs/functest.tempest.image.html
- check_results_file: 'functest_results.txt'
+ check_results_file:
+ - 'functest_results.txt'
+ portal_key_file: tempest_logs/functest.tempest.image.html
sub_testcase_list:
- tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_register_upload_get_image_file[id-139b765e-7f3d-4b3d-8b37-3ca3876ee318,smoke]
- tempest.api.image.v2.test_versions.VersionsTest.test_list_versions[id-659ea30a-a17c-4317-832c-0f68ed23c31d,smoke]
@@ -248,41 +247,43 @@ Dovetail Run Commands
Dovetail compliance test entry!
Options:
- --deploy-scenario TEXT Specify the DEPLOY_SCENARIO which will be used as input by each testcase respectively
+ --opnfv-ci Only enabled when running with OPNFV CI jobs and pushing results to TestAPI DB
--optional Run all optional test cases.
+ --mandatory Run all mandatory test cases.
+ --deploy-scenario TEXT Specify the DEPLOY_SCENARIO which will be used as input by each testcase respectively
+ -n, --no-clean Keep all Containers created for debuging.
+ --no-api-validation disable strict API response validation
--offline run in offline method, which means not to update the docker upstream images, functest, yardstick, etc.
-r, --report Create a tarball file to upload to OVP web portal
+ -s, --stop Flag for stopping on test case failure.
-d, --debug Flag for showing debug log on screen.
--testcase TEXT Compliance testcase. Specify option multiple times to include multiple test cases.
--testarea TEXT Compliance testarea within testsuite. Specify option multiple times to include multiple test areas.
- -s, --stop Flag for stopping on test case failure.
- -n, --no-clean Keep all Containers created for debuging.
- --no-api-validation disable strict API response validation
- --mandatory Run all mandatory test cases.
--testsuite TEXT compliance testsuite.
-h, --help Show this message and exit.
.. code-block:: bash
root@1f230e719e44:~/dovetail/dovetail# dovetail run --testcase functest.vping.ssh --offline -r --deploy-scenario os-nosdn-ovs-ha
- 2017-10-12 14:57:51,278 - run - INFO - ================================================
- 2017-10-12 14:57:51,278 - run - INFO - Dovetail compliance: ovp.2018.09!
- 2017-10-12 14:57:51,278 - run - INFO - ================================================
- 2017-10-12 14:57:51,278 - run - INFO - Build tag: daily-master-b80bca76-af5d-11e7-879a-0242ac110002
- 2017-10-12 14:57:51,278 - run - INFO - DEPLOY_SCENARIO : os-nosdn-ovs-ha
- 2017-10-12 14:57:51,336 - run - WARNING - There is no hosts file /home/dovetail/pre_config/hosts.yaml, may be some issues with domain name resolution.
- 2017-10-12 14:57:51,336 - run - INFO - Get hardware info of all nodes list in file /home/cvp/pre_config/pod.yaml ...
- 2017-10-12 14:57:51,336 - run - INFO - Hardware info of all nodes are stored in file /home/cvp/results/all_hosts_info.json.
- 2017-10-12 14:57:51,517 - run - INFO - >>[testcase]: functest.vping.ssh
- 2017-10-12 14:58:21,325 - report.Report - INFO - Results have been stored with file /home/cvp/results/functest_results.txt.
- 2017-10-12 14:58:21,325 - report.Report - INFO -
+ 2019-12-06 02:51:52,634 - run - INFO - ================================================
+ 2019-12-06 02:51:52,634 - run - INFO - Dovetail compliance: ovp.2019.12!
+ 2019-12-06 02:51:52,634 - run - INFO - ================================================
+ 2019-12-06 02:51:52,634 - run - INFO - Build tag: daily-master-5b58584a-17d3-11ea-878a-0242ac110002
+ 2019-12-06 02:51:52,634 - run - INFO - DEPLOY_SCENARIO : os-nosdn-ovs-ha
+ 2019-12-06 02:51:53,077 - run - INFO - >>[testcase]: functest.vping.ssh
+ 2019-12-06 02:51:53,078 - dovetail.test_runner.DockerRunner - WARNING - There is no hosts file /home/ovp/pre_config/hosts.yaml. This may cause some issues with domain name resolution.
+ 2019-12-06 02:51:54,048 - dovetail.test_runner.DockerRunner - INFO - Get hardware info of all nodes list in file /home/ovp/pre_config/pod.yaml ...
+ 2019-12-06 02:51:54,049 - dovetail.test_runner.DockerRunner - INFO - Hardware info of all nodes are stored in file /home/dovetail/results/all_hosts_info.json.
+ 2019-12-06 02:51:54,073 - dovetail.container.Container - WARNING - There is no hosts file /home/ovp/pre_config/hosts.yaml. This may cause some issues with domain name resolution.
+ 2019-12-06 02:52:57,982 - dovetail.report.Report - INFO - Results have been stored with files: ['/home/ovp/results/functest_results.txt'].
+ 2019-12-06 02:52:57,986 - dovetail.report.Report - INFO -
Dovetail Report
- Version: 2018.09
- Build Tag: daily-master-b80bca76-af5d-11e7-879a-0242ac110002
- Test Date: 2018-08-13 03:23:56 UTC
- Duration: 291.92 s
+ Version: 2019.12
+ Build Tag: daily-master-5b58584a-17d3-11ea-878a-0242ac110002
+ Test Date: 2019-12-06 02:52:57 UTC
+ Duration: 64.91 s
- Pass Rate: 0.00% (1/1)
- vping: pass rate 100%
+ Pass Rate: 100.00% (1/1)
+ vping: pass rate 100.00%
-functest.vping.ssh PASS
diff --git a/docs/testing/user/userguide/images/tocsa_vnf_test_environment.png b/docs/testing/user/userguide/images/tocsa_vnf_test_environment.png
new file mode 100644
index 00000000..78b3f74a
--- /dev/null
+++ b/docs/testing/user/userguide/images/tocsa_vnf_test_environment.png
Binary files differ
diff --git a/docs/testing/user/userguide/images/tosca_vnf_test_flow.png b/docs/testing/user/userguide/images/tosca_vnf_test_flow.png
new file mode 100644
index 00000000..87dc8ec4
--- /dev/null
+++ b/docs/testing/user/userguide/images/tosca_vnf_test_flow.png
Binary files differ
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 355817df..98ca56e0 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -13,3 +13,5 @@ OVP Testing User Guide
testing_guide.rst
cli_reference.rst
+ api_testing_guide.rst
+ vnf_test_guide.rst
diff --git a/docs/testing/user/userguide/testing_guide.rst b/docs/testing/user/userguide/testing_guide.rst
index fa8ad1a6..d1c31683 100644
--- a/docs/testing/user/userguide/testing_guide.rst
+++ b/docs/testing/user/userguide/testing_guide.rst
@@ -2,16 +2,19 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
-==========================================
-Conducting OVP Testing with Dovetail
-==========================================
+=========================================
+Conducting OVP NFVI Testing with Dovetail
+=========================================
Overview
-------------------------------
+--------
-The Dovetail testing framework for OVP consists of two major parts: the testing client that
+This guide provides the instructions for the OVP Infrastructure testing. For the
+OVP VNF testing, please refer to the next section.
+
+The Dovetail testing framework for OVP consists of two major parts: the testing client which
executes all test cases in a lab (vendor self-testing or a third party lab),
-and the server system that is hosted by the OVP administrator to store and
+and the server system which is hosted by the OVP administrator to store and
view test results based on a web API. The following diagram illustrates
this overall framework.
@@ -25,10 +28,10 @@ the System Under Test (SUT) itself.
The above diagram assumes that the tester's Test Host is situated in a DMZ, which
has internal network access to the SUT and external access via the public Internet.
The public Internet connection allows for easy installation of the Dovetail containers.
-A singular compressed file that includes all the underlying results can be pulled from
+A single compressed file that includes all the underlying results can be pulled from
the Test Host and uploaded to the OPNFV OVP server.
This arrangement may not be supported in some labs. Dovetail also supports an offline mode of
-installation that is illustrated in the next diagram.
+installation which is illustrated in the next diagram.
.. image:: ../../../images/dovetail_offline_mode.png
:align: center
@@ -44,8 +47,7 @@ The rest of this guide will describe how to install the Dovetail tool as a
Docker container image, go over the steps of running the OVP test suite, and
then discuss how to view test results and make sense of them.
-Readers interested
-in using Dovetail for its functionalities beyond OVP testing, e.g. for in-house
+Readers interested in using Dovetail for its functionalities beyond OVP testing, e.g. for in-house
or extended testing, should consult the Dovetail developer's guide for additional
information.
@@ -136,16 +138,13 @@ also work, but community support may be more available on Docker 17.03 CE or gre
If your Test Host does not have Docker installed, or Docker is older than 1.12.3,
or you have Docker version other than 17.03 CE and wish to change,
you will need to install, upgrade, or re-install in order to run Dovetail.
-The Docker installation process
-can be more complex, you should refer to the official
+If you need further assistance with the Docker installation process, you should refer to the official
Docker installation guide that is relevant to your Test Host's operating system.
-The above installation steps assume that the Test Host is in the online mode. For offline
-testing, use the following offline installation steps instead.
-
-In order to install Docker offline, download Docker static binaries and copy the
-tar file to the Test Host, such as for Ubuntu14.04, you may follow the following link
-to install,
+The above installation steps assume that the Test Host is in the online mode.
+For offline testing, use the following offline installation steps instead.
+For instance, download the Docker static binaries and copy the tar file to the
+Test Host; for Ubuntu 14.04, you may follow the link below:
.. code-block:: bash
@@ -155,7 +154,7 @@ Configuring the Test Host Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Test Host needs a few environment variables set correctly in order to access the
-Openstack API required to drive the Dovetail tests. For convenience and as a convention,
+OpenStack API which is required to drive the Dovetail tests. For convenience and as a convention,
we will also create a home directory for storing all Dovetail related config files and
results files:
@@ -164,8 +163,8 @@ results files:
$ mkdir -p ${HOME}/dovetail
$ export DOVETAIL_HOME=${HOME}/dovetail
-Here we set dovetail home directory to be ``${HOME}/dovetail`` for an example.
-Then create 2 directories named ``pre_config`` and ``images`` in this directory
+For example, here we set the Dovetail home directory to be ``${HOME}/dovetail``.
+Then create two directories named ``pre_config`` and ``images`` under this directory
to store all Dovetail related config files and all test images respectively:
.. code-block:: bash
@@ -173,14 +172,15 @@ to store all Dovetail related config files and all test images respectively:
$ mkdir -p ${DOVETAIL_HOME}/pre_config
$ mkdir -p ${DOVETAIL_HOME}/images
+
Setting up Primary Configuration File
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-At this point, you will need to consult your SUT (Openstack) administrator to correctly set
+At this point, you will need to consult your SUT (OpenStack) administrator to correctly set
the configurations in a file named ``env_config.sh``.
-The Openstack settings need to be configured such that the Dovetail client has all the necessary
+The OpenStack settings need to be configured such that the Dovetail client has all the necessary
credentials and privileges to execute all test operations. If the SUT uses terms
-somewhat differently from the standard Openstack naming, you will need to adjust
+somewhat differently from the standard OpenStack naming, you will need to adjust
this file accordingly.
Create and edit the file ``${DOVETAIL_HOME}/pre_config/env_config.sh`` so that
@@ -191,17 +191,17 @@ this file should contain.
$ cat ${DOVETAIL_HOME}/pre_config/env_config.sh
- # Project-level authentication scope (name or ID), recommend admin project.
+ # Project-level authentication scope (name or ID), admin project is recommended.
export OS_PROJECT_NAME=admin
- # Authentication username, belongs to the project above, recommend admin user.
+ # Authentication username, belongs to the project above, admin user is recommended.
export OS_USERNAME=admin
# Authentication password. Use your own password
export OS_PASSWORD=xxxxxxxx
# Authentication URL, one of the endpoints of keystone service. If this is v3 version,
- # there need some extra variables as follows.
+ # some extra variables are needed, as follows.
export OS_AUTH_URL='http://xxx.xxx.xxx.xxx:5000/v3'
# Default is 2.0. If use keystone v3 API, this should be set as 3.
@@ -234,9 +234,16 @@ this file should contain.
# Otherwise, it will create a role 'Member' to do that.
export NEW_USER_ROLE=xxx
+ # For the XCI installer, the following environment parameters should be added to
+ # this file. Otherwise, these parameters can be ignored.
+ export INSTALLER_TYPE=osa
+ export DEPLOY_SCENARIO=os-nosdn-nofeature
+ export XCI_FLAVOR=noha
+
+
The OS_AUTH_URL variable is key to configure correctly, as the other admin services
-are gleaned from the identity service. HTTPS should be configured in the SUT so
+are collected from the identity service. HTTPS should be configured in the SUT so
either OS_CACERT or OS_INSECURE should be uncommented.
However, if SSL is disabled in the SUT, comment out both OS_CACERT and OS_INSECURE variables.
Ensure the '/path/to/pre_config' directory in
@@ -271,7 +278,12 @@ Here is an example of what this file should contain.
# Expected device name when a volume is attached to an instance.
volume_device_name: vdb
-Use the listing above as a minimum to execute the mandatory test areas.
+ # One sub test case of functest.tempest.osinterop will be skipped if this version is not provided.
+ # The default microversion range for tempest is [None - None].
+ # Test case functest.tempest.osinterop requires the range to be [2.2 - latest].
+ max_microversion: 2.65
+
+Use the listing above as a minimum to execute the mandatory test cases.
If the optional BGPVPN Tempest API tests shall be run, Tempest needs to be told
that the BGPVPN service is available. To do that, add the following to the
@@ -291,9 +303,9 @@ name, role, ip, as well as the user and key_filename or password to login to the
must create the file ``${DOVETAIL_HOME}/pre_config/pod.yaml`` to store the info.
For some HA test cases, they will log in the controller node 'node1' and kill the specific processes.
The names of the specific processes may be different with the actual ones of the SUTs.
-The process names can also be changed with file ``${DOVETAIL_HOME}/pre_config/pod.yaml``.
+The process names can also be changed in the file ``${DOVETAIL_HOME}/pre_config/pod.yaml``.
-This file is also used as basis to collect SUT hardware information that is stored alongside results and
+This file is also used as a basis to collect SUT hardware information which is stored alongside results and
uploaded to the OVP web portal. The SUT hardware information can be viewed within the
'My Results' view in the OVP web portal by clicking the SUT column 'info' link. In order to
collect SUT hardware information holistically, ensure this file has an entry for each of
@@ -305,32 +317,37 @@ Below is a sample with the required syntax when password is employed by the cont
nodes:
-
- # This can not be changed and must be node0.
+ # The info of node0 is only used by one optional test case 'yardstick.ha.controller_restart'.
+ # If you don't plan to run it, this Jumpserver node can be ignored.
+ # This cannot be changed and **must** be node0.
name: node0
- # This must be Jumpserver.
+ # This **must** be Jumpserver.
role: Jumpserver
- # This is the install IP of a node which has ipmitool installed.
+ # This is the instance IP of a node which has ipmitool installed.
ip: xx.xx.xx.xx
- # User name of this node. This user must have sudo privileges.
+ # User name for logging into this node. This user **must** have sudo privileges.
user: root
# Password of the user.
password: root
-
- # This can not be changed and must be node1.
+ # Almost all HA test cases try to log in to a controller node named 'node1'
+ # and then kill some processes running on it.
+ # Unless you want to reset the attack node name for each test case, this
+ # name cannot be changed and **must** be node1.
name: node1
- # This must be controller.
+ # This **must** be controller.
role: Controller
- # This is the install IP of a controller node, which is the haproxy primary node
+ # This is the instance IP of a controller node, which is the haproxy primary node
ip: xx.xx.xx.xx
- # User name of this node. This user must have sudo privileges.
+ # User name for logging into this node. This user **must** have sudo privileges.
user: root
# Password of the user.
@@ -338,14 +355,21 @@ Below is a sample with the required syntax when password is employed by the cont
process_info:
-
+ # For all HA test cases, there are 2 parameters, 'attack_process' and 'attack_host',
+ # which can be set by users instead of using the default values.
+ # The 'attack_process' is the name of the process that the HA test case tries to kill.
+ # The 'attack_host' is the name of the host that the test case tries to log in to
+ # and then kill the process running on it.
+ # Following are 2 samples.
+
# The default attack process of yardstick.ha.rabbitmq is 'rabbitmq-server'.
- # Here can reset it to be 'rabbitmq'.
+ # Here it can be reset to 'rabbitmq'.
testcase_name: yardstick.ha.rabbitmq
attack_process: rabbitmq
-
# The default attack host for all HA test cases is 'node1'.
- # Here can reset it to be any other node given in the section 'nodes'.
+ # Here it can be reset to any other node given in the section 'nodes'.
testcase_name: yardstick.ha.glance_api
attack_host: node2
@@ -363,10 +387,9 @@ A sample is provided below to show the required syntax when using a key file.
user: root
# Private ssh key for accessing the controller nodes. If a keyfile is
- # being used, the path specified **must** be as shown below as this
- # is the location of the user-provided private ssh key inside the
- # Yardstick container.
- key_filename: /home/opnfv/userconfig/pre_config/id_rsa
+ # being used instead of password, it **must** be put under
+ # $DOVETAIL_HOME/pre_config/ and named 'id_rsa'.
+ key_filename: /home/dovetail/pre_config/id_rsa
Under nodes, repeat entries for name, role, ip, user and password or key file for each of the
controller/compute nodes that comprise the SUT. Use a '-' to separate each of the entries.
@@ -375,7 +398,7 @@ Specify the value for the role key to be either 'Controller' or 'Compute' for ea
Under process_info, repeat entries for testcase_name, attack_host and attack_process
for each HA test case. Use a '-' to separate each of the entries.
The default attack host of all HA test cases is **node1**.
-The default attack processes of all HA test cases are list here,
+The default attack processes of all HA test cases are listed here:
+-------------------------------+-------------------------+
| Test Case Name | Attack Process Name |
@@ -407,7 +430,7 @@ If your SUT uses a hosts file to translate hostnames into the IP of OS_AUTH_URL,
to provide the hosts info in a file ``$DOVETAIL_HOME/pre_config/hosts.yaml``.
Create and edit file ``$DOVETAIL_HOME/pre_config/hosts.yaml``. Below is an example of what
-this file should contain. Note, that multiple hostnames can be specified for each IP address,
+this file should contain. Note that multiple hostnames can be specified for each IP address,
as shown in the generic syntax below the example.
.. code-block:: bash
@@ -435,20 +458,26 @@ OPNFV's OVP web page first to determine the right tag for OVP testing.
Online Test Host
""""""""""""""""
-If the Test Host is online, you can directly pull Dovetail Docker image and download Ubuntu
-and Cirros images. All other dependent docker images will automatically be downloaded. The
-Ubuntu and Cirros images are used by Dovetail for image creation and VM instantiation within
-the SUT.
+If the Test Host is online, you can directly pull the Dovetail Docker image, and all
+other dependent Docker images will be downloaded automatically. You can also download
+other related VM images, such as the Ubuntu and Cirros images, which are used by Dovetail
+for image creation and VM instantiation within the SUT.
+
+The download URL for each VM image is given below. Cirros-0.4.0 and Ubuntu-16.04
+are used by mandatory test cases, so they are the only 2 images that **must** be downloaded
+before doing the test. There are also 2 other optional VM images, Ubuntu-14.04 and
+Cloudify-manager, which are used by the optional test cases functest.vnf.vepc and functest.vnf.vims.
+If you don't plan to run these 2 test cases, you can skip downloading these 2 images.
.. code-block:: bash
$ wget -nc http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img -P ${DOVETAIL_HOME}/images
- $ wget -nc https://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img -P ${DOVETAIL_HOME}/images
$ wget -nc https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img -P ${DOVETAIL_HOME}/images
- $ wget -nc http://repository.cloudifysource.org/cloudify/4.0.1/sp-release/cloudify-manager-premium-4.0.1.qcow2 -P ${DOVETAIL_HOME}/images
+ $ wget -nc https://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img -P ${DOVETAIL_HOME}/images
+ $ wget -nc http://repository.cloudifysource.org/cloudify/19.01.24/community-release/cloudify-docker-manager-community-19.01.24.tar -P ${DOVETAIL_HOME}/images
- $ sudo docker pull opnfv/dovetail:latest
- latest: Pulling from opnfv/dovetail
+ $ sudo docker pull opnfv/dovetail:ovp-3.0.0
+ ovp-3.0.0: Pulling from opnfv/dovetail
324d088ce065: Pull complete
2ab951b6c615: Pull complete
9b01635313e2: Pull complete
@@ -460,51 +489,48 @@ the SUT.
0ad9f4168266: Pull complete
d949894f87f6: Pull complete
Digest: sha256:7449601108ebc5c40f76a5cd9065ca5e18053be643a0eeac778f537719336c29
- Status: Downloaded newer image for opnfv/dovetail:latest
-
-An example of the <tag> is **latest**.
+ Status: Downloaded newer image for opnfv/dovetail:ovp-3.0.0
Offline Test Host
"""""""""""""""""
-If the Test Host is offline, you will need to first pull the Dovetail Docker image, and all the
+If the Test Host is offline, you will need to first pull the Dovetail Docker image and all the
dependent images that Dovetail uses, to a host that is online. The reason that you need
to pull all dependent images is because Dovetail normally does dependency checking at run-time
and automatically pulls images as needed, if the Test Host is online. If the Test Host is
offline, then all these dependencies will need to be manually copied.
-The Docker images and Cirros image below are necessary for all mandatory test cases.
+The Docker images and the Ubuntu and Cirros images below are necessary for all mandatory test cases.
.. code-block:: bash
- $ sudo docker pull opnfv/dovetail:latest
- $ sudo docker pull opnfv/functest-smoke:fraser
- $ sudo docker pull opnfv/yardstick:stable
- $ sudo docker pull opnfv/bottlenecks:stable
+ $ sudo docker pull opnfv/dovetail:ovp-3.0.0
+ $ sudo docker pull opnfv/functest-smoke:hunter
+ $ sudo docker pull opnfv/functest-healthcheck:hunter
+ $ sudo docker pull opnfv/yardstick:opnfv-8.0.0
+ $ sudo docker pull opnfv/bottlenecks:8.0.1-latest
$ wget -nc http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img -P {ANY_DIR}
+ $ wget -nc https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img -P ${DOVETAIL_HOME}/images
The other Docker images and test images below are only used by optional test cases.
.. code-block:: bash
- $ sudo docker pull opnfv/functest-healthcheck:fraser
- $ sudo docker pull opnfv/functest-features:fraser
- $ sudo docker pull opnfv/functest-vnf:fraser
+ $ sudo docker pull opnfv/functest-vnf:hunter
$ wget -nc https://cloud-images.ubuntu.com/releases/14.04/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img -P {ANY_DIR}
- $ wget -nc https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img -P {ANY_DIR}
- $ wget -nc http://repository.cloudifysource.org/cloudify/4.0.1/sp-release/cloudify-manager-premium-4.0.1.qcow2 -P {ANY_DIR}
+ $ wget -nc http://repository.cloudifysource.org/cloudify/19.01.24/community-release/cloudify-docker-manager-community-19.01.24.tar -P ${DOVETAIL_HOME}/images
-Once all these images are pulled, save the images, copy to the Test Host, and then load
+Once all these images are pulled, save the images, copy them to the Test Host, and then load
the Dovetail image and all dependent images at the Test Host.
At the online host, save the images with the command below.
.. code-block:: bash
- $ sudo docker save -o dovetail.tar opnfv/dovetail:latest \
- opnfv/functest-smoke:fraser opnfv/functest-healthcheck:fraser \
- opnfv/functest-features:fraser opnfv/functest-vnf:fraser \
- opnfv/yardstick:stable opnfv/bottlenecks:stable
+ $ sudo docker save -o dovetail.tar opnfv/dovetail:ovp-3.0.0 \
+ opnfv/functest-smoke:hunter opnfv/functest-healthcheck:hunter \
+ opnfv/functest-vnf:hunter \
+ opnfv/yardstick:opnfv-8.0.0 opnfv/bottlenecks:8.0.1-latest
The command above creates a dovetail.tar file with all the images, which can then be copied
to the Test Host. To load the Dovetail images on the Test Host execute the command below.
@@ -518,14 +544,13 @@ Now check to see that all Docker images have been pulled or loaded properly.
.. code-block:: bash
$ sudo docker images
- REPOSITORY TAG IMAGE ID CREATED SIZE
- opnfv/dovetail latest ac3b2d12b1b0 24 hours ago 784 MB
- opnfv/functest-smoke fraser 010aacb7c1ee 17 hours ago 594.2 MB
- opnfv/functest-healthcheck fraser 2cfd4523f797 17 hours ago 234 MB
- opnfv/functest-features fraser b61d4abd56fd 17 hours ago 530.5 MB
- opnfv/functest-vnf fraser 929e847a22c3 17 hours ago 1.87 GB
- opnfv/yardstick stable 84b4edebfc44 17 hours ago 2.052 GB
- opnfv/bottlenecks stable 3d4ed98a6c9a 21 hours ago 638 MB
+ REPOSITORY TAG IMAGE ID CREATED SIZE
+ opnfv/dovetail ovp-3.0.0 4b68659da24d 22 hours ago 825MB
+ opnfv/functest-smoke hunter c0253f6de153 3 weeks ago 556MB
+ opnfv/functest-healthcheck hunter fb6d766e38e0 3 weeks ago 379MB
+ opnfv/functest-vnf hunter 31466d52d155 21 hours ago 1.1GB
+ opnfv/yardstick opnfv-8.0.0 189d7d9fbcb2 7 months ago 2.54GB
+ opnfv/bottlenecks 8.0.1-latest 44c1b9fb25aa 5 hours ago 837MB
After copying and loading the Dovetail images at the Test Host, also copy the test images
(Ubuntu, Cirros and cloudify-manager) to the Test Host.
@@ -533,12 +558,12 @@ After copying and loading the Dovetail images at the Test Host, also copy the te
- Copy image ``cirros-0.4.0-x86_64-disk.img`` to ``${DOVETAIL_HOME}/images/``.
- Copy image ``ubuntu-14.04-server-cloudimg-amd64-disk1.img`` to ``${DOVETAIL_HOME}/images/``.
- Copy image ``ubuntu-16.04-server-cloudimg-amd64-disk1.img`` to ``${DOVETAIL_HOME}/images/``.
-- Copy image ``cloudify-manager-premium-4.0.1.qcow2`` to ``${DOVETAIL_HOME}/images/``.
+- Copy image ``cloudify-docker-manager-community-19.01.24.tar`` to ``${DOVETAIL_HOME}/images/``.
Starting Dovetail Docker
------------------------
-Regardless of whether you pulled down the Dovetail image directly online, or loaded from
+Regardless of whether you pulled down the Dovetail image directly online, or loaded it from
a static image tar file, you are now ready to run Dovetail. Use the command below to
create a Dovetail container and get access to its shell.
@@ -550,11 +575,12 @@ create a Dovetail container and get access to its shell.
-v /var/run/docker.sock:/var/run/docker.sock \
opnfv/dovetail:<tag> /bin/bash
-The ``-e`` option sets the DOVETAIL_HOME environment variable in the container and the
-``-v`` options map files in the Test Host to files in the container. The latter option
-allows the Dovetail container to read the configuration files and write result files into
-DOVETAIL_HOME on the Test Host. The user should be within the Dovetail container shell,
-once the command above is executed.
+The ``-e`` option sets the DOVETAIL_HOME environment variable in the container
+and the ``-v`` options mount files from the Test Host to the destination path
+inside the container. The latter options allow the Dovetail container to read
+the configuration files and write result files into DOVETAIL_HOME on the Test
+Host. The user should be within the Dovetail container shell, once the command
+above is executed.
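+
+As an illustration, with the image pulled earlier, the invocation could look like the
+sketch below (the tag and the DOVETAIL_HOME value are assumptions; adjust them to your
+own setup):
+
+.. code-block:: bash
+
+   $ sudo docker run --privileged=true -it \
+         -e DOVETAIL_HOME=$DOVETAIL_HOME \
+         -v $DOVETAIL_HOME:$DOVETAIL_HOME \
+         -v /var/run/docker.sock:/var/run/docker.sock \
+         opnfv/dovetail:ovp-3.0.0 /bin/bash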
Running the OVP Test Suite
@@ -569,10 +595,10 @@ for the details of the CLI.
$ dovetail run --testsuite <test-suite-name>
-The '--testsuite' option is used to control the set of tests intended for execution
+The ``--testsuite`` option is used to control the set of tests intended for execution
at a high level. For the purposes of running the OVP test suite, the test suite name follows
-the following format, ``ovp.<major>.<minor>.<patch>``. The latest and default test suite is
-ovp.2018.09.
+the following format, ``ovp.<release-version>``. The latest and default test suite is
+ovp.2019.12.
.. code-block:: bash
@@ -582,18 +608,18 @@ This command is equal to
.. code-block:: bash
- $ dovetail run --testsuite ovp.2018.09
+ $ dovetail run --testsuite ovp.2019.12
Without any additional options, the above command will attempt to execute all mandatory and
-optional test cases with test suite ovp.2018.09.
+optional test cases with test suite ovp.2019.12.
To restrict the breadth of the test scope, it can also be specified using options
-'--mandatory' or '--optional'.
+``--mandatory`` or ``--optional``.
.. code-block:: bash
$ dovetail run --mandatory
-Also there is a '--testcase' option provided to run a specified test case.
+Also there is a ``--testcase`` option provided to run a specified test case.
.. code-block:: bash
@@ -619,14 +645,14 @@ This is happening if the name of the scenario contains the substring "ovs".
In this case, the flavor which is going to be used for the running test case has
'hugepage' characteristics.
-Taking the above into our consideration and having in our mind that the DEPLOY_SCENARIO
+Taking the above into consideration and having in mind that the DEPLOY_SCENARIO
environment parameter is not used by dovetail framework (the initial value is 'unknown'),
we set as input, for the features that they are responsible for the test case execution,
the DEPLOY_SCENARIO environment parameter having as substring the feature name "ovs"
(e.g. os-nosdn-ovs-ha).
Note for the users:
- - if their system uses DPDK, they should run with --deploy-scenario <xx-yy-ovs-zz>
+ - if their system uses DPDK, they should run with ``--deploy-scenario <xx-yy-ovs-zz>``
(e.g. os-nosdn-ovs-ha)
- this is an experimental feature
@@ -637,14 +663,14 @@ Note for the users:
By default, results are stored in local files on the Test Host at ``$DOVETAIL_HOME/results``.
Each time the 'dovetail run' command is executed, the results in the aforementioned directory
are overwritten. To create a singular compressed result file for upload to the OVP portal or
-for archival purposes, the tool provided an option '--report'.
+for archival purposes, the tool provides an option ``--report``.
.. code-block:: bash
$ dovetail run --report
If the Test Host is offline, ``--offline`` should be added to support running with
-local resources.
+local resources. Otherwise, it will try to download resources online at run time.
.. code-block:: bash
@@ -656,22 +682,23 @@ result file on the Test Host.
.. code-block:: bash
$ dovetail run --offline --testcase functest.vping.userdata --report
- 2018-05-22 08:16:16,353 - run - INFO - ================================================
- 2018-05-22 08:16:16,353 - run - INFO - Dovetail compliance: ovp.2018.09!
- 2018-05-22 08:16:16,353 - run - INFO - ================================================
- 2018-05-22 08:16:16,353 - run - INFO - Build tag: daily-master-660de986-5d98-11e8-b635-0242ac110001
- 2018-05-22 08:19:31,595 - run - WARNING - There is no hosts file /home/dovetail/pre_config/hosts.yaml, may be some issues with domain name resolution.
- 2018-05-22 08:19:31,595 - run - INFO - Get hardware info of all nodes list in file /home/dovetail/pre_config/pod.yaml ...
- 2018-05-22 08:19:39,778 - run - INFO - Hardware info of all nodes are stored in file /home/dovetail/results/all_hosts_info.json.
- 2018-05-22 08:19:39,961 - run - INFO - >>[testcase]: functest.vping.userdata
- 2018-05-22 08:31:17,961 - run - INFO - Results have been stored with file /home/dovetail/results/functest_results.txt.
- 2018-05-22 08:31:17,969 - report.Report - INFO -
+ 2019-12-04 07:31:13,156 - run - INFO - ================================================
+ 2019-12-04 07:31:13,157 - run - INFO - Dovetail compliance: ovp.2019.12!
+ 2019-12-04 07:31:13,157 - run - INFO - ================================================
+ 2019-12-04 07:31:13,157 - run - INFO - Build tag: daily-master-0c9184e6-1668-11ea-b1cd-0242ac110002
+ 2019-12-04 07:31:13,610 - run - INFO - >>[testcase]: functest.vping.userdata
+ 2019-12-04 07:31:13,612 - dovetail.test_runner.DockerRunner - WARNING - There is no hosts file /home/ovp/pre_config/hosts.yaml. This may cause some issues with domain name resolution.
+ 2019-12-04 07:31:14,587 - dovetail.test_runner.DockerRunner - INFO - Get hardware info of all nodes list in file /home/ovp/pre_config/pod.yaml ...
+ 2019-12-04 07:31:14,587 - dovetail.test_runner.DockerRunner - INFO - Hardware info of all nodes are stored in file /home/dovetail/results/all_hosts_info.json.
+ 2019-12-04 07:31:14,612 - dovetail.container.Container - WARNING - There is no hosts file /home/ovp/pre_config/hosts.yaml. This may cause some issues with domain name resolution.
+ 2019-12-04 07:32:13,804 - dovetail.report.Report - INFO - Results have been stored with files: ['/home/ovp/results/functest_results.txt'].
+ 2019-12-04 07:32:13,808 - dovetail.report.Report - INFO -
Dovetail Report
- Version: 1.0.0
- Build Tag: daily-master-660de986-5d98-11e8-b635-0242ac110001
- Upload Date: 2018-05-22 08:31:17 UTC
- Duration: 698.01 s
+ Version: 2019.12
+ Build Tag: daily-master-0c9184e6-1668-11ea-b1cd-0242ac110002
+ Test Date: 2019-12-04 07:32:13 UTC
+ Duration: 60.20 s
Pass Rate: 100.00% (1/1)
vping: pass rate 100.00%
@@ -680,28 +707,27 @@ result file on the Test Host.
When test execution is complete, a tar file with all result and log files is written in
``$DOVETAIL_HOME`` on the Test Host. An example filename is
-``${DOVETAIL_HOME}/logs_20180105_0858.tar.gz``. The file is named using a
-timestamp that follows the convention 'YearMonthDay-HourMinute'. In this case, it was generated
-at 08:58 on January 5th, 2018. This tar file is used to upload to the OVP portal.
+``${DOVETAIL_HOME}/logs_20191204_0732.tar.gz``. The file is named using a timestamp
+that follows the convention 'YearMonthDay_HourMinute'. In this case, it was generated
+at 07:32 on December 4th, 2019. This tar file is used for uploading the logs and
+results to the OVP portal.
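+
+For a quick sanity check before uploading, the contents of the archive can be listed
+(the file name below is just the example from above):
+
+.. code-block:: bash
+
+   $ tar -tzf ${DOVETAIL_HOME}/logs_20191204_0732.tar.gz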
Making Sense of OVP Test Results
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When a tester is performing trial runs, Dovetail stores results in local files on the Test
-Host by default within the directory specified below.
-
+Host by default within the directory ``$DOVETAIL_HOME/results``.
-.. code-block:: bash
+ * Log file: dovetail.log
- cd $DOVETAIL_HOME/results
+ * Review the dovetail.log to see if all important information has been captured
-#. Local file
+ * By default, the log is captured without DEBUG information.
- * Log file: dovetail.log
+ * Add the option ``-d/--debug`` to change the mode to DEBUG.
- * Review the dovetail.log to see if all important information has been captured
- - in default mode without DEBUG.
+ * Result file: results.json
* Review the results.json to see all results data including criteria for PASS or FAIL.
@@ -711,11 +737,11 @@ Host by default within the directory specified below.
``security_logs/functest.security.XXX.html`` respectively,
which has the passed, skipped and failed test cases results.
- * This kind of files need to be opened with a web browser.
+ * These files need to be opened with a web browser.
- * The skipped test cases have the reason for the users to see why these test cases skipped.
+ * The skipped test cases are accompanied by a reason tag for the users to see why these test cases were skipped.
- * The failed test cases have rich debug information for the users to see why these test cases fail.
+ * The failed test cases have rich debug information for the users to see why these test cases failed.
* Vping test cases
@@ -729,39 +755,31 @@ Host by default within the directory specified below.
* Its log is stored in ``stress_logs/bottlenecks.stress.XXX.log``.
- * Snaps test cases
-
- * Its log is stored in ``snaps_logs/functest.snaps.smoke.log``.
-
* VNF test cases
* Its log is stored in ``vnf_logs/functest.vnf.XXX.log``.
- * Bgpvpn test cases
-
- * Can see the log details in ``bgpvpn_logs/functest.bgpvpn.XXX.log``.
-
OVP Portal Web Interface
------------------------
The OVP portal is a public web interface for the community to collaborate on results
and to submit results for official OPNFV compliance verification. The portal can be used as a
-resource by users and testers to navigate and inspect results more easily than by manually
+resource by users to navigate and inspect results more easily than by manually
inspecting the log files. The portal also allows users to share results in a private manner
until they are ready to submit results for peer community review.
* Web Site URL
- * https://verified.opnfv.org
+ * https://nfvi-verified.lfnetworking.org
* Sign In / Sign Up Links
- * Accounts are exposed through Linux Foundation or OpenStack account credentials.
+ * Accounts are exposed through Linux Foundation credentials.
* If you already have a Linux Foundation ID, you can sign in directly with your ID.
- * If you do not have a Linux Foundation ID, you can sign up for a new one using 'Sign Up'
+ * If you do not have a Linux Foundation ID, you can sign up for a new one using 'Sign Up'.
* My Results Tab
@@ -769,20 +787,25 @@ until they are ready to submit results for peer community review.
* This page lists all results uploaded by you after signing in.
- * You can also upload results on this page with the two steps below.
+ * After following the two steps below, the results are uploaded and put in status 'private'.
+
+ * Obtain results tar file located at ``${DOVETAIL_HOME}/``, e.g. ``logs_20180105_0858.tar.gz``.
+
+ * Use the *Choose File* button where a file selection dialog allows you to choose your result file from the hard-disk. Then click the *Upload result* button and see a results ID once your upload succeeds.
+
+ * Results remain in status 'private' until they are submitted for review.
- * Obtain results tar file located at ``${DOVETAIL_HOME}/``, example ``logs_20180105_0858.tar.gz``
+ * Use the *Operation* column drop-down option *submit to review*, to expose results to
+ OPNFV community peer reviewers. Use the *withdraw submit* option to reverse this action.
- * Use the *Choose File* button where a file selection dialog allows you to choose your result
- file from the hard-disk. Then click the *Upload* button and see a results ID once your
- upload succeeds.
+ * The results status is changed to 'review' after they are submitted for review.
- * Results are status 'private' until they are submitted for review.
+ * Use *View Reviews* to find the review status, including reviewers' names and the outcome.
- * Use the *Operation* column drop-down option 'submit to review', to expose results to
- OPNFV community peer reviewers. Use the 'withdraw submit' option to reverse this action.
+ * The administrator will approve the results which have received 2 positive outcomes from 2 reviewers.
+ Then the status will be changed to 'verified'.
- * Use the *Operation* column drop-down option 'share with' to share results with other
+ * Use the *Operation* column drop-down option *share with* to share results with other
users by supplying either the login user ID or the email address associated with
the share target account. The result is exposed to the share target but remains private
otherwise.
@@ -791,18 +814,19 @@ until they are ready to submit results for peer community review.
* This page shows your account info after you sign in.
+ * There are 3 different roles: administrator, user and reviewer.
+
Updating Dovetail or a Test Suite
---------------------------------
Follow the instructions in section `Installing Dovetail on the Test Host`_ and
-`Running the OVP Test Suite`_ by replacing the docker images with new_tags,
+`Running the OVP Test Suite`_ by replacing the docker images with new_tags:
.. code-block:: bash
sudo docker pull opnfv/dovetail:<dovetail_new_tag>
sudo docker pull opnfv/functest:<functest_new_tag>
sudo docker pull opnfv/yardstick:<yardstick_new_tag>
+ sudo docker pull opnfv/bottlenecks:<bottlenecks_new_tag>
This step is necessary if dovetail software or the OVP test suite have updates.
-
-
diff --git a/docs/testing/user/userguide/vnf_test_guide.rst b/docs/testing/user/userguide/vnf_test_guide.rst
new file mode 100644
index 00000000..00c4e4ef
--- /dev/null
+++ b/docs/testing/user/userguide/vnf_test_guide.rst
@@ -0,0 +1,714 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, ONAP, and others.
+
+.. _dovetail-vnf_testers_guide:
+
+===================================
+Conducting ONAP VNF Testing for OVP
+===================================
+
+Overview
+--------
+
+As the LFN verification framework, the Dovetail team has worked with the ONAP VVP and VTP
+projects to enable VNF testing, results submission, and results review to be completed
+through the same web portal and processes used for the NFVI testing.
+For more details about VNF SDK and VVP, please refer to `ONAP VNF SDK Compliance Verification Program
+<https://docs.onap.org/en/elalto/submodules/vnfsdk/model.git/docs/files/VNFSDK-LFN-CVC.html>`_
+and `ONAP VVP <https://docs.onap.org/en/elalto/submodules/vvp/documentation.git/docs/index.html>`_.
+
+Testing is available for both HEAT- and TOSCA-defined VNFs, but the process is different depending
+on the template language. This user guide covers the testing process for both VNF types in the
+two sections below.
+
+
+Definitions and abbreviations
+-----------------------------
+
+- LFN - Linux Foundation Networking
+- ONAP - Open Network Automation Platform
+- OVP - OPNFV Verification Program
+- VNF - Virtual Network Function
+- VNF SDK - VNF Software Development Kit
+- VTP - VNF Test Platform
+- VVP - VNF Validation Program
+
+Testing of HEAT based VNFs
+--------------------------
+
+Environment Preparation
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Prerequisites
+"""""""""""""
+
+- `ONAP ElAlto Release deployed via OOM <https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html>`_
+- An OpenStack deployment is available and provisioned as ONAP's Cloud Site
+- kubectl is installed on the system used to start the testing
+- bash
+- VNF Heat Templates
+- Preload json files
+
+After deploying ONAP, you need to configure ONAP with:
+
+- A cloud owner
+- A cloud region
+- A subscriber
+- A service type
+- A project name
+- An owning entity
+- A platform
+- A line of business
+- A cloud site
+
+If you're not familiar with how to configure ONAP, guides that use
+`robot <https://onap.readthedocs.io/en/elalto/submodules/integration.git/docs/docs_robot.html>`_
+or `direct api <https://wiki.onap.org/pages/viewpage.action?pageId=25431491>`_ requests are available
+to help, as well as a guide for adding a new OpenStack site to ONAP.
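+
+Before running the testsuite, it can help to confirm that kubectl can reach the ONAP deployment
+and that the OOM robot pod is up. A minimal sanity check (the namespace name is an example and may
+differ in your deployment) could look like this:
+
+.. code-block:: bash
+
+    # Confirm kubectl connectivity and that the OOM robot pod is running
+    kubectl get pods -n onap | grep robot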
+
+VVP Test Tool Setup
+"""""""""""""""""""
+
+On your local machine, or the system from which you will run the tests, you will need to clone the
+ONAP OOM project repo:
+
+.. code-block:: bash
+
+ git clone --branch 5.0.1-ONAP ssh://<username>@gerrit.onap.org:29418/oom --recurse-submodules
+
+VNF Preparation
+^^^^^^^^^^^^^^^
+
+The VNF lifecycle validation testsuite requires the VNF to be packaged into a specific directory
+hierarchy, shown below.
+
+.. code-block:: text
+
+ vnf_folder
+ ├── /templates
+ | └── base.yaml
+ | └── base.env
+ | └── incremental_0.yaml
+ | └── incremental_0.env
+ | └── ...
+ ├── /preloads
+ | └── base_preload.json
+ | └── incremental_0_preload.json
+ | └── ...
+ └── vnf-details.json
+
+- The name for vnf_folder is free-form, and can be located anywhere on your computer. The path to this folder will be passed to the testsuite as an argument.
+- /templates should contain your VVP-compliant VNF heat templates.
+- /preloads should contain a preload file for each VNF module (TODO: add link to preload documentation).
+
+  - For a VNF-API preload: vnf-name, vnf-type, generic-vnf-type, and generic-vnf-name should be empty strings.
+  - For a GR-API preload: vnf-name, vnf-type, vf-module-type, and vf-module-name should be empty strings.
+  - This information will be populated at runtime by the testsuite.
+
+- vnf-details should be a json file with the information that will be used by ONAP to instantiate the VNF. The structure of vnf-details is shown below.
+- VNF disk image must be uploaded and available in the OpenStack project being managed by ONAP
+- Modules must contain an entry for each module of the VNF. Only one module can be a base module.
+- api_type should match the format of the preloads that are provided in the package.
+- The other information should match what was used to configure ONAP during the pre-requisite section of this guide.
+
+.. code-block:: json
+
+ {
+ "vnf_name": "The Vnf Name",
+ "description": "Description of the VNF",
+ "modules": [
+ {
+ "filename": "base.yaml",
+ "isBase": "true",
+ "preload": "base_preload.json"
+ },
+ {
+ "filename": "incremental_0.yaml",
+ "isBase": "false",
+ "preload": "incremental_0.json"
+ },
+ ...
+ ],
+ "api_type": "[gr_api] or [vnf_api]",
+ "subscriber": "<subscriber name>",
+ "service_type": "<service type>",
+ "tenant_name": "<name of tenant>",
+ "region_id": "<name of region>",
+ "cloud_owner": "<name of cloud owner>",
+ "project_name": "<name of project>",
+ "owning_entity": "<name of owning entity>",
+ "platform": "<name of platform>",
+ "line_of_business": "<name of line of business>",
+ "os_password": "<openstack password>"
+ }
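+
+As an illustration, assembling the package described above could look like the sketch below; all
+file and directory names here are examples taken from the layout shown earlier.
+
+.. code-block:: bash
+
+    # Hypothetical example of laying out the VNF package
+    mkdir -p my_vnf/templates my_vnf/preloads
+    cp base.yaml base.env incremental_0.yaml incremental_0.env my_vnf/templates/
+    cp base_preload.json incremental_0_preload.json my_vnf/preloads/
+    cp vnf-details.json my_vnf/
+    # The path to my_vnf is later passed to the testsuite with --folder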
+
+Running the HEAT VNF Test
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ONAP OOM Robot framework runs the test, using kubectl to manage the execution. The framework
+copies your VNF template files into the robot container that executes the test.
+
+.. code-block:: bash
+
+ cd oom/kubernetes/robot
+ $ ./instantiate-k8s.sh --help
+ ./instantiate-k8s.sh [options]
+
+ required:
+ -n, --namespace <namespace> namespace that robot pod is running under.
+ -f, --folder <folder> path to folder containing heat templates, preloads, and vnf-details.json.
+
+ additional options:
+ -p, --poll some cloud environments (like azure) have a short time out value when executing
+ kubectl. If your shell exits before the testsuite finishes, using this option
+ will poll the testsuite logs every 30 seconds until the test finishes.
+ -t, --tag <tag> robot testcase tag to execute (default is instantiate_vnf).
+
+ This script executes the VNF instantiation robot testsuite.
+ - It copies the VNF folder to the robot container that is part of the ONAP deployment.
+ - It models, distributes, and instantiates a heat-based VNF.
+ - It copies the logs to an output directory, and creates a tarball for upload to the OVP portal.
+
+
+**Sample execution:**
+
+.. code-block:: bash
+
+ $ ./instantiate-k8s.sh --namespace onap --folder /tmp/vnf-instantiation/examples/VNF_API/pass/multi_module/ --poll
+ ...
+ ...
+ ...
+ ...
+ ------------------------------------------------------------------------------
+ Testsuites.Vnf Instantiation :: The main driver for instantiating ... | PASS |
+ 1 critical test, 1 passed, 0 failed
+ 1 test total, 1 passed, 0 failed
+ ==============================================================================
+ Testsuites | PASS |
+ 1 critical test, 1 passed, 0 failed
+ 1 test total, 1 passed, 0 failed
+ ==============================================================================
+ Output: /share/logs/0003_ete_instantiate_vnf/output.xml
+ + set +x
+ testsuite has finished
+ Copying Results from pod...
+ /tmp/vnf-instantiation /tmp/vnf-instantiation
+ a log.html
+ a results.json
+ a stack_report.json
+ a validation-scripts.json
+ /tmp/vnf-instantiation
+ VNF test results: /tmp/vnfdata.46749/vnf_heat_results.tar.gz
+
+The testsuite takes about 10-15 minutes for a simple VNF, and will take longer for a more complicated VNF.
+
+Reporting Results
+"""""""""""""""""
+Once the testsuite finishes, it creates a directory and tarball in /tmp (the names of the directory
+and file are shown at the end of the script's stdout). That directory contains a results.json with
+the overall outcome of the test, in the structure shown below.
+
+**Log Files**
+
+The output tar file will have 4 log files in it.
+
+- results.json: This is the high-level results file of all of the test steps, and is consumed by the OVP portal.
+- report.json: This is the output of the vvp validation scripts.
+- stack_report.json: This is the output from querying openstack to validate the heat modules.
+- log.html: This is the robot log, and contains each execution step of the testcase.
+
+If the result is "PASS", that means the testsuite was successful and the tarball is ready for submission
+to the OVP portal.
+
+**results.json**
+
+.. code-block:: json
+
+ {
+ "vnf_checksum": "afc57604a3b3b7401d5b8648328807b594d7711355a2315095ac57db4c334a50",
+ "build_tag": "vnf-validation-53270",
+ "version": "2019.09",
+ "test_date": "2019-09-04 17:50:10.575",
+ "duration": 437.002,
+ "vnf_type": "heat",
+ "testcases_list": [
+ {
+ "mandatory": "true",
+ "name": "onap-vvp.validate.heat",
+ "result": "PASS",
+ "objective": "onap heat template validation",
+ "sub_testcase": [],
+ "portal_key_file": "report.json"
+ },
+ {
+ "mandatory": "true",
+ "name": "onap-vvp.lifecycle_validate.heat",
+ "result": "PASS",
+ "objective": "onap vnf lifecycle validation",
+ "sub_testcase": [
+ {
+ "name": "model-and-distribute",
+ "result": "PASS"
+ },
+ {
+ "name": "instantiation",
+ "result": "PASS"
+ }
+ ],
+ "portal_key_file": "log.html"
+ },
+ {
+ "mandatory": "true",
+ "name": "stack_validation",
+ "result": "PASS",
+ "objective": "onap vnf openstack validation",
+ "sub_testcase": [],
+ "portal_key_file": "stack_report.json"
+ }
+ ]
+ }
+
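+Before uploading, a quick check of the tarball contents and the per-testcase outcomes can be done
+as sketched below; the directory name comes from the sample run above and will differ on your system.
+
+.. code-block:: bash
+
+    # Adjust the path to the directory reported by your run
+    cd /tmp/vnfdata.46749
+    tar -tzf vnf_heat_results.tar.gz    # should list the four log files
+    grep '"result"' results.json        # every testcase and sub_testcase should be PASS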
+
+Additional Resources
+^^^^^^^^^^^^^^^^^^^^
+
+- `ONAP VVP Project <https://wiki.onap.org/display/DW/VNF+Validation+Program+Project>`_
+- `VVP Wiki Users Guide (this will track current ONAP master) <https://wiki.onap.org/pages/viewpage.action?pageId=68546123>`_
+
+Sample VNF templates are available on the VVP Wiki Users Guide page.
+
+Testing of TOSCA based VNFs
+---------------------------
+
+VNF Test Platform (VTP) provides a platform to on-board the different test cases required for
+OVP for various VNF testing, provided by the VNFSDK (for TOSCA) and VVP (for HEAT) projects in
+ONAP. It generates the test case outputs that are uploaded to the OVP portal for
+VNF badging.
+
+TOSCA VNF Test Environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+As prerequisites, it is assumed that a working ONAP deployment, a vendor VNFM and an OpenStack
+cloud are already available. The installation steps below help to set up the VTP components and CLI.
+
+.. image:: images/tocsa_vnf_test_environment.png
+ :align: center
+ :scale: 100%
+
+Installation
+^^^^^^^^^^^^
+
+Clone the VNFSDK repo.
+
+.. code-block:: bash
+
+ git clone --branch elalto https://git.onap.org/vnfsdk/refrepo
+
+Install VTP by using the script *refrepo/vnfmarket-be/deployment/install/vtp_install.sh*.
+
+Follow the steps below (in sequence):
+
+- vtp_install.sh --download : downloads all required artifacts into /opt/vtp_stage
+- vtp_install.sh --install : installs VTP (/opt/controller) and the CLI (/opt/oclip)
+- vtp_install.sh --start : starts the VTP controller as a tomcat service and the CLI as the oclip service
+- vtp_install.sh --verify : verifies that the setup is done properly by running some test cases
+
+The last step (verify) checks the health of the VTP components as well as the TOSCA VNF compliance and validation test cases.
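+
+Put together, a typical installation run of these steps could look like the sketch below, assuming
+the repo was cloned into the current directory.
+
+.. code-block:: bash
+
+    cd refrepo/vnfmarket-be/deployment/install
+    ./vtp_install.sh --download
+    ./vtp_install.sh --install
+    ./vtp_install.sh --start
+    ./vtp_install.sh --verify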
+
+Check Available Test Cases
+""""""""""""""""""""""""""
+
+VTP supports checking the compliance of VNFs and PNFs against the ONAP VNFREQS.
+
+To check:
+
+- Go to the command console
+- Run the command oclip
+- It will provide a command prompt:
+
+*oclip:open-cli>*
+
+Now run the command shown below to check the supported compliance test cases for the VNFREQS.
+
+- csar-validate - Validates the given VNF CSAR against all configured VNFREQS.
+- csar-validate-rxxx - Validates the given VNF CSAR against the single VNFREQS xxx.
+
+.. code-block:: bash
+
+ oclip:open-cli>schema-list --product onap-dublin --service vnf-compliance
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |product |service |command |ocs-version |enabled |rpc |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate-r10087 |1.0 |true | |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate |1.0 |true | |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate-r26885 |1.0 |true | |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate-r54356 |1.0 |true | |
+ ...
+
+To see the details of each VNFREQS, run:
+
+.. code-block:: bash
+
+ oclip:open-cli>use onap-dublin
+ oclip:onap-dublin>csar-validate-r54356 --help
+ usage: oclip csar-validate-r54356
+
+ Data types used by NFV node and is based on TOSCA/YAML constructs specified in draft GS NFV-SOL 001.
+ The node data definitions/attributes used in VNFD MUST comply.
+
+Now run the command below to check the supported validation test cases:
+
+.. code-block:: bash
+
+ oclip:onap-dublin>use open-cli
+ oclip:open-cli>schema-list --product onap-dublin --service vnf-validation
+ +--------------+----------------+----------------------+--------------+----------+------+
+ |product |service |command |ocs-version |enabled |rpc |
+ +--------------+----------------+----------------------+--------------+----------+------+
+ |onap-dublin |vnf-validation |vnf-tosca-provision |1.0 |true | |
+ +--------------+----------------+----------------------+--------------+----------+------+
+
+Configure ONAP with required VNFM and cloud details
+"""""""""""""""""""""""""""""""""""""""""""""""""""
+
+**1. Setup the OCOMP profile onap-dublin**
+
+Run the following commands to configure the ONAP service URLs and credentials as given below; these
+will be used by VTP while executing the test cases:
+
+.. code-block:: bash
+
+ oclip:open-cli>use onap-dublin
+ oclip:onap-dublin>profile onap-dublin
+ oclip:onap-dublin>set sdc.onboarding:host-url=http://159.138.8.8:30280
+ oclip:onap-dublin>set sdc.onboarding:host-username=cs0008
+ oclip:onap-dublin>set sdc.onboarding:host-password=demo123456!
+ oclip:onap-dublin>set sdc.catalog:host-url=http://159.138.8.8:30205
+ oclip:onap-dublin>set sdc.catalog:host-password=demo123456\!
+ oclip:onap-dublin>set sdc.catalog:host-username=cs0008
+ oclip:onap-dublin>set sdc.catalog:service-model-approve:host-username=gv0001
+ oclip:onap-dublin>set sdc.catalog:service-model-distribute:host-username=op0001
+ oclip:onap-dublin>set sdc.catalog:service-model-test-start:host-username=jm0007
+ oclip:onap-dublin>set sdc.catalog:service-model-test-accept:host-username=jm0007
+ oclip:onap-dublin>set sdc.catalog:service-model-add-artifact:host-username=ocomp
+ oclip:onap-dublin>set sdc.catalog:vf-model-add-artifact:host-username=ocomp
+ oclip:onap-dublin>set aai:host-url=https://159.138.8.8:30233
+ oclip:onap-dublin>set aai:host-username=AAI
+ oclip:onap-dublin>set aai:host-password=AAI
+ oclip:onap-dublin>set vfc:host-url=http://159.138.8.8:30280
+ oclip:onap-dublin>set multicloud:host-url=http://159.138.8.8:30280
+
+NOTE: Most of the above values will be the same, except for the IP address used in the
+URL, which is the ONAP Kubernetes cluster IP.
+
+By default, the SDC onboarding service does not expose a node port that is reachable from outside
+the ONAP network. To enable external access, register the SDC onboarding service with MSB and use
+the MSB URL for sdc.onboarding:host-url.
+
+.. code-block:: bash
+
+ oclip:onap-dublin> microservice-create --service-name sdcob --service-version v1.0 --service-url /onboarding-api/v1.0 --path /onboarding-api/v1.0 --node-ip 172.16.1.0 --node-port 8081
+
+NOTE: To find the node-ip and node-port, use the following steps.
+
+Find the SDC onboarding service IP and port details as shown here:
+
+.. code-block:: bash
+
+ [root@onap-dublin-vfw-93996-50c1z ~]# kubectl get pods -n onap -o wide | grep sdc-onboarding-be
+ dev-sdc-sdc-onboarding-be-5564b877c8-vpwr5 2/2 Running 0 29d 172.16.1.0 192.168.2.163 <none> <none>
+ dev-sdc-sdc-onboarding-be-cassandra-init-mtvz6 0/1 Completed 0 29d 172.16.0.220 192.168.2.163 <none> <none>
+ [root@onap-dublin-vfw-93996-50c1z ~]#
+
+Note down the IP address of sdc-onboarding-be (172.16.1.0 in this example).
+
+.. code-block:: bash
+
+ [root@onap-dublin-vfw-93996-50c1z ~]# kubectl get services -n onap -o wide | grep sdc-onboarding-be
+ sdc-onboarding-be ClusterIP 10.247.198.92 <none> 8445/TCP,8081/TCP 29d app=sdc-onboarding-be,release=dev-sdc
+ [root@onap-dublin-vfw-93996-50c1z ~]#
+
+Note down the ports of sdc-onboarding-be (8445 and 8081 in this example).
+
+Similarly, the IP addresses and ports of other services can be discovered in the same way, if they are not already known.
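+
+A generic form of the same lookup, with <service-name> as a placeholder, is:
+
+.. code-block:: bash
+
+    # Replace <service-name> with the service you are looking for, e.g. sdc-onboarding-be
+    kubectl get pods -n onap -o wide | grep <service-name>
+    kubectl get services -n onap -o wide | grep <service-name>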
+
+Verify these settings by typing *set*:
+
+.. code-block:: bash
+
+ oclip:onap-dublin> set
+
+This profile is then used when running the test cases against the ONAP setup configured in it.
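+
+For example (the remaining arguments are described in the test execution sections below):
+
+.. code-block:: bash
+
+    oclip --profile onap-dublin vnf-tosca-provision ...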
+
+**2. Setup SDC consumer**
+
+SDC uses the consumer concept for configuring the required VF model and service model artifacts.
+Run the following command to create a consumer named ocomp, which is already configured in the
+onap-dublin profile created in the steps above.
+
+.. code-block:: bash
+
+ oclip --product onap-dublin --profile onap-dublin sdc-consumer-create --consumer-name ocomp
+
+NOTE: The oclip command can be used in scripting mode, as above, or in interactive mode, as used
+in the earlier steps.
+
+**3. Update the cloud and vnfm driver details**
+
+In the configuration file /opt/oclip/conf/vnf-tosca-provision.json, update the cloud
+and VNFM details.
+
+.. code-block:: json
+
+ "cloud": {
+ "identity-url": "http://10.12.11.1:5000/v3",
+ "username": "admin",
+ "password": "password",
+ "region": "RegionOVP",
+ "version": "ocata",
+ "tenant": "ocomp"
+ },
+ "vnfm":{
+ "hwvnfmdriver":{
+ "version": "v1.0",
+ "url": "http://159.138.8.8:38088",
+ "username": "admin",
+ "password": "xxxx"
+ },
+ "gvnfmdriver":{
+ "version": "v1.0",
+ "url": "http://159.138.8.8:30280"
+ }
+ }
+
+**4. Configure the desired VNFREQS (optional)**
+
+VTP allows configuring the set of VNFREQS to be considered while running the VNF
+compliance test cases, in the configuration file /opt/oclip/conf/vnfreqs.properties.
+
+If not available, please create this file with following entries:
+
+.. code-block:: bash
+
+ vnfreqs.enabled=r02454,r04298,r07879,r09467,r13390,r23823,r26881,r27310,r35851,r40293,r43958,r66070,r77707,r77786,r87234,r10087,r21322,r26885,r40820,r35854,r65486,r17852,r46527,r15837,r54356,r67895,r95321,r32155,r01123,r51347,r787965,r130206
+ pnfreqs.enabled=r10087,r87234,r35854,r15837,r17852,r293901,r146092,r57019,r787965,r130206
+ # ignored all chef and ansible related tests
+ vnferrors.ignored=
+ pnferrors.ignored=
+
+Running the TOSCA VNF Test
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Every test case provided in VTP comes with guidelines on how to use it. On every execution, use the following additional arguments based on your requirements:
+
+- --product onap-dublin - Makes VTP choose the test cases written for the onap-dublin version
+- --profile onap-dublin - Makes VTP use the profile settings provided by the administrator (optional)
+- --request-id - Helps VTP track the progress of the test case execution; the user can use this id to follow it (optional)
+
+The final test case execution is then as shown in the first command below. To find the details of the test case arguments, run the second command.
+
+.. code-block:: bash
+
+ oclip --product onap-dublin --profile onap-dublin --request-id req-1 <test case name> <test case arguments>
+ oclip --product onap-dublin <test case name> --help
+
+Running TOSCA VNF Compliance Testing
+""""""""""""""""""""""""""""""""""""
+
+Run the compliance test as below with the given CSAR file:
+
+.. code-block:: bash
+
+ oclip --product onap-dublin csar-validate --csar <csar file complete path>
+
+It produces results in the format below:
+
+.. code-block:: json
+
+ {
+ "date": "Fri Sep 20 17:34:24 CST 2019",
+ "criteria": "PASS",
+ "contact": "ONAP VTP Team onap-discuss@lists.onap.org",
+ "results": [
+ {
+ "description": "V2.4.1 (2018-02)",
+ "passed": true,
+ "vnfreqName": "SOL004",
+ "errors": []
+ },
+ {
+ "description": "If the VNF or PNF CSAR Package utilizes Option 2 for package security, then the complete CSAR file MUST be digitally signed with the VNF or PNF provider private key. The VNF or PNF provider delivers one zip file consisting of the CSAR file, a signature file and a certificate file that includes the VNF or PNF provider public key. The certificate may also be included in the signature container, if the signature format allows that. The VNF or PNF provider creates a zip file consisting of the CSAR file with .csar extension, signature and certificate files. The signature and certificate files must be siblings of the CSAR file with extensions .cms and .cert respectively.\n",
+ "passed": true,
+ "vnfreqName": "r787965",
+ "errors": []
+ }
+ ],
+ "platform": "VNFSDK - VNF Test Platform (VTP) 1.0",
+ "vnf": {
+ "mode": "WITH_TOSCA_META_DIR",
+ "vendor": "ONAP",
+ "name": null,
+ "type": "TOSCA",
+ "version": null
+ }
+ }
+
+In case of errors, the errors section will contain a list of details as below. Each error block
+carries an error code and error details. The error code will be useful for providing a
+troubleshooting guide in the future. Note that, to generate the test result in the OVP archive
+format, it is recommended to run this compliance test with a request-id (see the example after the
+error listing below), similar to running the validation test.
+
+.. code-block:: json
+
+ [
+ {
+ "vnfreqNo": "R66070",
+ "code": "0x1000",
+ "message": "MissinEntry-Definitions file",
+ "lineNumber": -1
+ }
+ ]
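+
+As noted above, when the result will later be packaged in the OVP archive format, run the compliance
+test with a request-id; a sketch with an arbitrary request-id value is shown below.
+
+.. code-block:: bash
+
+    oclip --request-id req-1 --product onap-dublin --profile onap-dublin csar-validate --csar <csar file complete path>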
+
+Running TOSCA VNF Validation Testing
+""""""""""""""""""""""""""""""""""""
+
+VTP provides the validation test case with the following modes:
+
+.. image:: images/tosca_vnf_test_flow.png
+ :align: left
+ :scale: 100%
+
+
+- setup: Creates the required vendor, service subscription and VNF cloud in ONAP
+- standup: From the given VSP CSAR, VNF CSAR and NS CSAR, creates the VF Model, NS Model and NS service
+- cleanup: Removes the entries created during provisioning
+- provision: Runs setup -> standup
+- validate: Runs setup -> standup -> cleanup
+- checkup: Helps to verify that the automation is deployed properly
+
+For OVP badging, the validate mode is used as below:
+
+.. code-block:: bash
+
+    oclip --request-id WkVVu9fD --product onap-dublin --profile onap-dublin vnf-tosca-provision --vsp <vsp csar> --vnf-csar <vnf csar> ...
+
+Validation testing takes a while to complete, so the user can use the request-id given
+above to track the progress as below:
+
+.. code-block:: bash
+
+ oclip execution-list --request-id WkVVu9fD
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |request-id |execution-id |product |service |command |profile |status |start-time |end-time |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731678753 |onap-dublin |vnf-validation |vnf-tosca-provision | |in-progress |2019-09-17T14:47:58.000 | |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731876397 |onap-dublin |sdc.catalog |service-model-test-request |onap-dublin |in-progress |2019-09-17T14:51:16.000 | |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731966966 |onap-dublin |sdc.onboarding |vsp-archive |onap-dublin |completed |2019-09-17T14:52:46.000 |2019-09-17T14:52:47.000 |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731976982 |onap-dublin |aai |subscription-delete |onap-dublin |completed |2019-09-17T14:52:56.000 |2019-09-17T14:52:57.000 |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731785780 |onap-dublin |aai |vnfm-create |onap-dublin |completed |2019-09-17T14:49:45.000 |2019-09-17T14:49:46.000 |
+ ......
+
+While executing the test cases, VTP provides a unique execution-id (2nd column) for each step. As shown
+in the example above, some steps are in progress while others have already completed. If there is an
+error, the status is set to failed.
+
+To find out the footprint of each step, the following commands are available:
+
+.. code-block:: bash
+
+ oclip execution-show-out --execution-id WkVVu9fD-1568731785780 - Reports the standard output logs
+ oclip execution-show-err --execution-id WkVVu9fD-1568731785780 - Reports the standard error logs
+    oclip execution-show-debug --execution-id WkVVu9fD-1568731785780 - Reports the debug details like HTTP request and response
+    oclip execution-show --execution-id WkVVu9fD-1568731785780 - Reports the complete foot-print of inputs and outputs of steps
+
+Track the progress of the vnf-tosca-provision test case until it is completed. Then the output of
+the validation test case can be retrieved as below:
+
+.. code-block:: bash
+
+ oclip execution-show --execution-id WkVVu9fD-1568731678753 - use vnf tosca test case execution id here
+
+It provides the output in the format below:
+
+.. code-block:: json
+
+ {
+ "output": {
+ "ns-id": null,
+ "vnf-id": "",
+ "vnfm-driver": "hwvnfmdriver",
+ "vnf-vendor-name": "huawei",
+ "onap-objects": {
+ "ns_instance_id": null,
+ "tenant_version": null,
+ "service_type_id": null,
+ "tenant_id": null,
+ "subscription_version": null,
+ "esr_vnfm_id": null,
+ "location_id": null,
+ "ns_version": null,
+ "vnf_status": "active",
+ "entitlement_id": null,
+ "ns_id": null,
+ "cloud_version": null,
+ "cloud_id": null,
+ "vlm_version": null,
+ "esr_vnfm_version": null,
+ "vlm_id": null,
+ "vsp_id": null,
+ "vf_id": null,
+ "ns_instance_status": "active",
+ "service_type_version": null,
+ "ns_uuid": null,
+ "location_version": null,
+ "feature_group_id": null,
+ "vf_version": null,
+ "vsp_version": null,
+ "agreement_id": null,
+ "vf_uuid": null,
+ "ns_vf_resource_id": null,
+ "vsp_version_id": null,
+ "customer_version": null,
+ "vf_inputs": null,
+ "customer_id": null,
+ "key_group_id": null,
+ },
+ "vnf-status": "active",
+ "vnf-name": "vgw",
+ "ns-status": "active"
+ },
+ "input": {
+ "mode": "validate",
+ "vsp": "/tmp/data/vtp-tmp-files/1568731645518.csar",
+ "vnfm-driver": "hwvnfmdriver",
+ "config-json": "/opt/oclip/conf/vnf-tosca-provision.json",
+ "vnf-vendor-name": "huawei",
+ "ns-csar": "/tmp/data/vtp-tmp-files/1568731660745.csar",
+ "onap-objects": "{}",
+ "timeout": "600000",
+ "vnf-name": "vgw",
+ "vnf-csar": "/tmp/data/vtp-tmp-files/1568731655310.csar"
+ },
+ "product": "onap-dublin",
+ "start-time": "2019-09-17T14:47:58.000",
+ "service": "vnf-validation",
+ "end-time": "2019-09-17T14:53:46.000",
+ "request-id": "WkVVu9fD-1568731678753",
+ "command": "vnf-tosca-provision",
+ "status": "completed"
+ }
+
+Reporting Results
+"""""""""""""""""
+
+VTP provides a translation tool to migrate the VTP results into the OVP portal format and to generate
+the tar file for a given test case execution. Please refer to `<https://github.com/onap/vnfsdk-refrepo/tree/master/vnfmarket-be/deployment/vtp2ovp>`_ for more details.
+
+Once the tar file is generated, it can be submitted to the OVP portal `<https://vnf-verified.lfnetworking.org/>`_.
+
+.. References
+.. _`OVP VNF portal`: https://vnf-verified.lfnetworking.org