author	morganrOL <morgan.richomme@orange.com>	2015-04-29 11:03:39 +0200
committer	morganrOL <morgan.richomme@orange.com>	2015-04-29 11:25:50 +0200
commit	f7f169a67a6f96c45979b6a9567b5194b6e3f4a3 (patch)
tree	c0d6d2db799b7f66b64e630c3cc3e288df4671a7
parent	9c18aa08865d2dd6fe2f2c5c4691ffb234c8c157 (diff)
update python script to manage Rally bench tests, add help section, exclude Tempest from all bench tests
JIRA: FUNCTEST-1

Change-Id: I479c3216706635738321fc96f6e02f84bbd707a4
Signed-off-by: morganrOL <morgan.richomme@orange.com>
-rw-r--r--	docs/functest.rst	60
-rw-r--r--	testcases/VIM/OpenStack/CI/libraries/run_rally.py	150
2 files changed, 127 insertions, 83 deletions
diff --git a/docs/functest.rst b/docs/functest.rst
index 34ff8d6c..19b85d96 100644
--- a/docs/functest.rst
+++ b/docs/functest.rst
@@ -2,10 +2,10 @@
OPNFV functional test guide
===========================
-Testing is a key challenge of OPNFV.
+Testing is a key challenge of OPNFV.
It shall be possible to run functional tests on any OPNFV solution.
-The goal of this document consists in
+The goal of this document consists in
* a description of functional tests for OPNFV
* a description of the tools needed to perform these tests
* the procedure to configure the tools and the scenarios associated with these tests
@@ -22,7 +22,7 @@ ETSI NFV defined 9 use cases (ref ETSI_):
* VNF as a Service
* NFV as a service
* VNF Forwarding graphs
- * Virtual Network Platform as a Service
+ * Virtual Network Platform as a Service
* Virtualisation of Mobile Core and IMS
* Virtualisation of Mobile station
* Fixed Access NFV
@@ -40,12 +40,12 @@ For release 1 (Arno), 5 test suites have been selected:
* vPing
* vIMS
-The 3 first suites are directly inherited from upstream projects.
+The 3 first suites are directly inherited from upstream projects.
vPing, which is already present in the Tempest suite, has been developed to provide a basic "hello world" functional test example.
.. _`Continuous Integration`: https://build.opnfv.org/ci/view/functest/
-vEPC, vPE, vHGW, vCDN use cases are not considered for first release.
+vEPC, vPE, vHGW, vCDN use cases are not considered for first release.
It does not mean that such use cases cannot be tested on OPNFV Arno.
It means that these use cases have not been integrated in the `Continuous Integration`_ and no specific work (integration or development) has been done for R1.
@@ -68,17 +68,17 @@ For release 1, the tools are not automatically installed.
.. _pharos: https://wiki.opnfv.org/pharos
It is recommended to install the different tools on the jump host server as defined in the pharos_ project.
-The high level architecture can be described as follow:
+The high level architecture can be described as follows:
-.. figure:: overall_description.png
+.. figure:: images/overall_description.png
:scale: 50
:alt: overall description
.. _description:
-------------------------------
-Description of the test cases
-------------------------------
+-----------------------------
+Description of the test cases
+-----------------------------
Rally bench test suite
======================
@@ -100,7 +100,7 @@ The goal of this test suite is to test the different modules of OpenStack and ge
This test suite provides performance information on VIM (OpenStack) part.
No SLA was defined for release 1; we only consider whether the tests pass or fail.
-
+
In the future, SLAs shall be considered (e.g. an acceptable boot time for a given image with a given flavour).
Through its integration in Continuous Integration, the evolution of the performance of these tests shall also be considered.
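
As an illustration of what such an SLA criterion could look like once introduced, the sketch below builds a minimal Rally scenario definition with an SLA block and saves it as a JSON file that "rally task start --abort-on-sla-failure" could consume. The scenario name, image, flavour and thresholds are purely illustrative and are not part of the Arno suites::

    import json

    # hypothetical SLA: no failed iteration and at most 60 seconds per iteration
    scenario = {
        "NovaServers.boot_and_delete_server": [
            {
                "args": {"flavor": {"name": "m1.small"},
                         "image": {"name": "cirros-0.3.3"}},
                "runner": {"type": "constant", "times": 10, "concurrency": 2},
                "sla": {"failure_rate": {"max": 0},
                        "max_seconds_per_iteration": 60}
            }
        ]
    }

    # write the scenario so it can be passed to 'rally task start'
    with open("opnfv-nova-sla-example.json", "w") as f:
        json.dump(scenario, f, indent=4)
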
@@ -112,7 +112,7 @@ Tempest
Tempest_ is the OpenStack Integration Test Suite. We use Rally to run the Tempest suite.
-The goal of this test is to check the OpenStack installation (sanity checks).
+The goal of this test is to check the OpenStack installation (sanity checks).
OpenDaylight
@@ -123,21 +123,21 @@ vPing
The goal of this test can be described as follows:
-.. figure:: vPing.png
+.. figure:: images/vPing.png
:scale: 50
:alt: vPing description
-
-The vPing test case is already present in Tempest suite.
-
+
+The vPing test case is already present in Tempest suite.
+
This example, using the OpenStack Python clients, can be considered a "hello world" example and may be modified for future use.
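
A minimal sketch of what such a check could look like with the OpenStack Python clients is given below. It is a simplified illustration, not the project's vPing implementation: the image and flavour names are assumptions, credentials are read from the usual OpenRC environment variables, and the ping is issued from the test host rather than from a second VM::

    import os
    import time

    from novaclient import client as nova_client  # python-novaclient assumed installed

    nova = nova_client.Client('2',
                              os.environ['OS_USERNAME'],
                              os.environ['OS_PASSWORD'],
                              os.environ['OS_TENANT_NAME'],
                              os.environ['OS_AUTH_URL'])

    # boot a small instance (image and flavour names are illustrative)
    image = nova.images.find(name='cirros-0.3.3')
    flavor = nova.flavors.find(name='m1.small')
    vm = nova.servers.create(name='opnfv-vping', image=image, flavor=flavor)

    # wait for the instance to become ACTIVE
    while vm.status != 'ACTIVE':
        time.sleep(3)
        vm = nova.servers.get(vm.id)

    # ping the first address reported for the instance and print a verdict
    ip = vm.networks.values()[0][0]
    result = os.system('ping -c 1 %s' % ip)
    print('vPing OK' if result == 0 else 'vPing KO')
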
vIMS
====
-vIMS is one of the testcases defined by ETSI.
+vIMS is one of the testcases defined by ETSI.
-.. figure:: http://fr.wikipedia.org/wiki/IP_Multimedia_Subsystem#/media/File:Ims_overview.png
+.. figure:: images/Ims_overview.png
:scale: 50
:alt: IMS (src wikipedia)
@@ -149,7 +149,7 @@ This functional test will verify that
* The virtual networking component of the platform can provide working IP connectivity between and among the VMs
* The platform as a whole is capable of supporting the running of a real virtualized network function that delivers a typical service offered by a network operator, i.e. voice telephony
-Functional testing of vIMS in OPNFV Release 1 will be limited to a basic, non-scalable and non-fault-tolerant deployment of IMS.
+Functional testing of vIMS in OPNFV Release 1 will be limited to a basic, non-scalable and non-fault-tolerant deployment of IMS.
Furthermore, in this release the vIMS will perform only control plane functions (i.e. processing of SIP signaling messages) and will not be passing RTP media streams.
In future releases, the same software elements can be deployed with multiple instances of each VNF component to provide a fault tolerant and dynamically scalable deployment of IMS. With the addition of virtualized Session Border Controller software elements, the scope of vIMS functional testing can be further expanded to include the handling of RTP media.
@@ -167,7 +167,7 @@ Tooling installation
2 tools are needed for the R1 functional tests:
* Rally
- * Robot
+ * Robot
Rally
@@ -177,15 +177,15 @@ Rally
.. _`OpenRC`: http://docs.openstack.org/user-guide/common/cli_set_environment_variables_using_openstack_rc.html
-The Rally creation can be describe as follow (ref: `Rally installation procedure`_):
- * Create or enter a folder where you want to check out the tool repos.
+The Rally installation can be described as follows (ref: `Rally installation procedure`_):
+ * Create or enter a folder where you want to check out the tool repos.
* $ git clone https://git.openstack.org/openstack/rally
* $ ./rally/install_rally.sh
* configure your `OpenRC`_ file to let Rally access to your OpenStack, you can either export it from Horizon or build it manually (OpenStack credentials are required)
* $ source Your_OpenRC_file
* $ rally deployment create --fromenv --name=my-opnfv-test
* $ rally-manage tempest install
-
+
You can check that the configuration of Rally is fine by typing 'rally deployment check'; you shall see the list of available services as follows::
# rally deployment check
@@ -202,7 +202,7 @@ You can check if the configuration of rally is fine by typing 'rally deployment
| nova_ec2 | compute_ec2 | Available |
| novav3 | computev3 | Available |
+-----------+-------------+------------+
-
+
# rally show images
+--------------------------------------+----------------------------------------------+------------+
| UUID | Name | Size (B) |
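
If this check needs to be scripted, for instance from the CI jobs, a small helper in the same os.popen style as run_rally.py can parse the table printed by 'rally deployment check'. This is only a sketch, not part of the delivered tooling, and it assumes the table layout shown above::

    import os
    import re

    def rally_services_available():
        """ return the service names reported as 'Available' by Rally (sketch) """
        output = os.popen('rally deployment check').read()
        # each row looks like '| keystone  | identity | Available |'
        return [m.group(1) for m in
                re.finditer(r'\|\s*(\w+)\s*\|\s*\w+\s*\|\s*Available\s*\|', output)]

    if __name__ == '__main__':
        print(rally_services_available())
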
@@ -245,7 +245,7 @@ Rally bench scenarios have been aggregated in json files.
A script has been developed to simplify the management of the tests and the integration in CI; get it from git::
# wget https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/libraries/run_rally.py
-
+
Several scenarios are available (all based on native Rally scenarios):
* glance
* nova
@@ -254,17 +254,17 @@ Several scenarios are available (all based on native Rally scenarios):
* neutron
* vm
* quotas
- * request
+ * request
* tempest
* all (every module except tempest)
You can run the script as follows::
#python run_rally.py keystone
-
+
The script will:
* get the json scenario (if not already available) and put it into the scenario folder
* run rally
- * generate the html result page into the result folder as opnfv-[module name]-[timestamp].html
+ * generate the html result page into the result folder as opnfv-[module name]-[timestamp].html
* generate the json result page into the result folder as opnfv-[module name]-[timestamp].json
* generate OK or KO
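
For the Continuous Integration jobs, a thin wrapper of this kind could run the script module by module and rely on the final OK/KO line printed for each run. The module list below and the way the verdict is read back from stdout are assumptions made for illustration::

    import os

    # modules to exercise; 'tempest' is kept out, as in the script's 'all' mode
    modules = ['authenticate', 'glance', 'keystone', 'neutron', 'nova', 'vm']

    failed = []
    for module in modules:
        output = os.popen('python run_rally.py %s' % module).read().strip()
        # run_rally.py ends by printing '<date> OK' or '<date> KO'
        verdict = output.splitlines()[-1] if output else 'KO'
        if not verdict.endswith('OK'):
            failed.append(module)

    if failed:
        print('failed modules: %s' % ', '.join(failed))
    else:
        print('all modules OK')
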
@@ -275,7 +275,7 @@ It is possible to use Rally to perform Tempest tests (ref: `tempest installation
You just need to run::
# rally verify start
-
+
The different modes available are smoke, baremetal, compute, data_processing, identity, image, network, object_storage, orchestration, telemetry, and volume. If you do not specify anything, the smoke tests are selected by default.
.. _`tempest installation guide using Rally`: https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
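
If the Tempest run also needs to be scripted, the same os.popen pattern can be reused. The '--set' option used below to select one of the modes listed above is an assumption to be checked against the installed Rally version::

    import os

    def run_tempest(set_name='smoke'):
        """ launch a Tempest verification through Rally for the given set (sketch) """
        cmd = 'rally verify start --set %s' % set_name
        print('running: %s' % cmd)
        return os.popen(cmd).read()

    if __name__ == '__main__':
        print(run_tempest('smoke'))
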
@@ -303,7 +303,7 @@ Test results
Rally bench suite
=================
-Results are available in the result folder through a html page and a json file.
+Results are available in the result folder through an HTML page and a JSON file.
Tempest suite
=============
diff --git a/testcases/VIM/OpenStack/CI/libraries/run_rally.py b/testcases/VIM/OpenStack/CI/libraries/run_rally.py
index da5e6adc..898fca6b 100644
--- a/testcases/VIM/OpenStack/CI/libraries/run_rally.py
+++ b/testcases/VIM/OpenStack/CI/libraries/run_rally.py
@@ -8,8 +8,35 @@
# which accompanies this distribution, and is available at
# http://www.apache.org/licenses/LICENSE-2.0
#
-import re, json, os, sys, urllib2
-
+import re, json, os, urllib2, argparse, logging
+
+""" tests configuration """
+tests = ['authenticate', 'glance', 'heat', 'keystone', 'neutron', 'nova', 'tempest', 'vm', 'all']
+parser = argparse.ArgumentParser()
+parser.add_argument("test_name", help="The name of the test you want to perform with rally. "
+ "Possible values are : "
+ "[ {d[0]} | {d[1]} | {d[2]} | {d[3]} | {d[4]} | {d[5]} | {d[6]} "
+ "| {d[7]} | {d[8]} ]. The 'all' value performs all the tests scenarios "
+ "except 'tempest'".format(d=tests))
+
+parser.add_argument("-d", "--debug", help="Debug mode", action="store_true")
+args = parser.parse_args()
+
+""" logging configuration """
+logger = logging.getLogger('run_rally')
+logger.setLevel(logging.DEBUG)
+
+ch = logging.StreamHandler()
+if args.debug:
+ ch.setLevel(logging.DEBUG)
+else:
+ ch.setLevel(logging.INFO)
+
+formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+ch.setFormatter(formatter)
+logger.addHandler(ch)
+
+
def get_task_id(cmd_raw):
"""
get task id from command rally result
@@ -23,7 +50,8 @@ def get_task_id(cmd_raw):
if match:
return match.group(1)
return None
-
+
+
def task_succeed(json_raw):
"""
Parse JSON from rally JSON results
@@ -43,94 +71,110 @@ def task_succeed(json_raw):
return True
+
def run_task(test_name):
"""
the "main" function of the script who lunch rally for a task
:param test_name: name for the rally test
:return: void
"""
+ logger.info('starting {} test ...'.format(test_name))
""" get the date """
cmd = os.popen("date '+%d%m%Y_%H%M'")
test_date = cmd.read().rstrip()
- """ check directory for test scenarios files or retrieve from git otherwise"""
+ """ check directory for scenarios test files or retrieve from git otherwise"""
+ proceed_test = True
tests_path = "./scenarios"
test_file_name = '{}/opnfv-{}.json'.format(tests_path, test_name)
if not os.path.exists(test_file_name):
- retrieve_test_cases_file(test_name, tests_path)
- print "Scenario successfully downloaded"
-
- print "Start test..."
- cmd = os.popen("rally task start --abort-on-sla-failure %s" % test_file_name)
- task_id = get_task_id(cmd.read())
-
- if task_id is None:
- print "./run_rally : failed to retrieve task_id"
- exit(-1)
-
- """ check for result directory and create it otherwise """
- report_path = "./results"
- if not os.path.exists(report_path):
- os.makedirs(report_path)
-
- report_file_name = '{}/opnfv-{}-{}.html'.format(report_path, test_name, test_date)
-
- os.popen("rally task report %s --out %s" % (task_id, report_file_name))
- cmd = os.popen("rally task results %s" % task_id)
- if task_succeed(cmd.read()):
- print "OK"
+ logger.debug('{} does not exist'.format(test_file_name))
+ proceed_test = retrieve_test_cases_file(test_name, tests_path)
+ logger.debug('successfully downloaded to : {}'.format(test_file_name))
+
+ """ we do the test only if we have a scenario test file """
+ if proceed_test:
+ cmd_line = "rally task start --abort-on-sla-failure %s" % test_file_name
+ logger.debug('running command line : {}'.format(cmd_line))
+ cmd = os.popen(cmd_line)
+ task_id = get_task_id(cmd.read())
+ logger.debug('task_id : {}'.format(task_id))
+
+ if task_id is None:
+ logger.error("failed to retrieve task_id")
+ exit(-1)
+
+ """ check for result directory and create it otherwise """
+ report_path = "./results"
+ if not os.path.exists(report_path):
+ logger.debug('{} does not exist, we create it'.format(report_path))
+ os.makedirs(report_path)
+
+ """ write html report file """
+ report_file_name = '{}/opnfv-{}-{}.html'.format(report_path, test_name, test_date)
+ cmd_line = "rally task report %s --out %s" % (task_id, report_file_name)
+ logger.debug('running command line : {}'.format(cmd_line))
+ os.popen(cmd_line)
+
+ """ get and save rally operation JSON result """
+ cmd_line = "rally task results %s" % task_id
+ logger.debug('running command line : {}'.format(cmd_line))
+ cmd = os.popen(cmd_line)
+ json_results = cmd.read()
+ with open('{}/opnfv-{}-{}.json'.format(report_path, test_name, test_date), 'w') as f:
+ logger.debug('saving json file')
+ f.write(json_results)
+
+ """ parse JSON operation result """
+ if task_succeed(json_results):
+ print '{} OK'.format(test_date)
+ else:
+ print '{} KO'.format(test_date)
else:
- print "KO"
+ logger.error('{} test failed, unable to find a scenario test file'.format(test_name))
def retrieve_test_cases_file(test_name, tests_path):
"""
Retrieve from github the sample test files
- :return: void
+ :return: Boolean that indicates the retrieval status
"""
""" do not add the "/" at the end """
url_base = "https://git.opnfv.org/cgit/functest/plain/testcases/VIM/OpenStack/CI/suites"
test_file_name = 'opnfv-{}.json'.format(test_name)
- print 'fetching {}/{} ...'.format(url_base, test_file_name)
- response = urllib2.urlopen('{}/{}'.format(url_base, test_file_name))
+ logger.info('fetching {}/{} ...'.format(url_base, test_file_name))
+
+ try:
+ response = urllib2.urlopen('{}/{}'.format(url_base, test_file_name))
+ except (urllib2.HTTPError, urllib2.URLError):
+ return False
file_raw = response.read()
- """ check if the test path existe otherwise we create it """
+ """ check if the test path exist otherwise we create it """
if not os.path.exists(tests_path):
os.makedirs(tests_path)
- with open('{}/{}'.format(tests_path,test_file_name), 'w') as file:
- file.write(file_raw)
-
-
+ with open('{}/{}'.format(tests_path, test_file_name), 'w') as f:
+ f.write(file_raw)
+ return True
+
+
def main():
""" configure script """
- tests = ['authenticate','glance','heat','keystone','neutron','nova','tempest','vm', 'all'];
-
-
- if len(sys.argv) != 2:
- options = '{d[0]} | {d[1]} | {d[2]} | {d[3]} | {d[4]} | {d[5]} | {d[6]} | {d[7]} | {d[8]}'.format(d=tests)
- print "./run_rally [", options, "]"
+ if not (args.test_name in tests):
+ logger.error('argument not valid')
exit(-1)
- test_name = sys.argv[1]
-
- if not (test_name in tests):
- print "argument not valid"
- exit(-1)
-
- if test_name == "all":
+
+ if args.test_name == "all":
for test_name in tests:
if not (test_name == 'all' or test_name == 'tempest'):
print(test_name)
run_task(test_name)
-
else:
- run_task(test_name)
-
-
+ run_task(args.test_name)
+
if __name__ == '__main__':
main()
-
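
With the argparse and logging setup introduced above, the script keeps the previous invocation style and gains an optional debug switch. Typical calls, using module names from the tests list, would be::

    # python run_rally.py nova
    # python run_rally.py all -d
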