author     Martin Kulhavy <martin.kulhavy@nokia.com>    2017-06-30 15:21:29 +0300
committer  Martin Kulhavy <martin.kulhavy@nokia.com>    2017-07-01 14:13:17 +0300
commit     0b3966412afeb23a98a955bfba1e8b461722ebed (patch)
tree       a07488d8556b6a843d4a6c4978f6e4bd75c72f9c /docs
parent     9c969dc408c2100f0106d9eb6466b58c421a287e (diff)
Fix typos in docs
Fixed multiple typos and minor spelling errors found in the documentation.

Change-Id: I102e3b7d3d421042dbef66f261e2183b0dfe24a8
Signed-off-by: Martin Kulhavy <martin.kulhavy@nokia.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/testing/developer/devguide/index.rst      36
-rw-r--r--  docs/testing/user/configguide/configguide.rst   6
-rw-r--r--  docs/testing/user/userguide/index.rst           8
3 files changed, 25 insertions, 25 deletions
diff --git a/docs/testing/developer/devguide/index.rst b/docs/testing/developer/devguide/index.rst
index 43f0804d7..551edec6f 100644
--- a/docs/testing/developer/devguide/index.rst
+++ b/docs/testing/developer/devguide/index.rst
@@ -81,7 +81,7 @@ The internal test cases in Danube are:
By internal, we mean that this particular test cases have been
developped and/or integrated by functest contributors and the associated
code is hosted in the Functest repository.
-An internal case can be fully developped or a simple integration of
+An internal case can be fully developed or a simple integration of
upstream suites (e.g. Tempest/Rally developped in OpenStack are just
integrated in Functest).
The structure of this repository is detailed in `[1]`_.
@@ -123,7 +123,7 @@ The external test cases are:
The code to run these test cases may be directly in the repository of
the project. We have also a **features** sub directory under opnfv_tests
-directory that may be used (it can be usefull if you want to reuse
+directory that may be used (it can be useful if you want to reuse
Functest library).
Functest framework
@@ -131,12 +131,12 @@ Functest framework
Functest can be considered as a framework.
Functest is release as a docker file, including tools, scripts and a CLI
-to prepare the environement and run tests.
+to prepare the environment and run tests.
It simplifies the integration of external test suites in CI pipeline
and provide commodity tools to collect and display results.
Since Colorado, test categories also known as tiers have been created to
-group similar tests, provide consistant sub-lists and at the end optimize
+group similar tests, provide consistent sub-lists and at the end optimize
test duration for CI (see How To section).
The definition of the tiers has been agreed by the testing working group.
@@ -212,7 +212,7 @@ functest/utils/
`-- openstack_utils.py
Note that for Openstack, keystone v3 is now deployed by default by compass,
-fuel and joid in Danube. All installers still support keysone v2 (deprecated in
+fuel and joid in Danube. All installers still support keystone v2 (deprecated in
next version).
Test collection framework
@@ -323,7 +323,7 @@ Please note that currently token authorization is implemented but is not yet ena
===================
An automatic reporting page has been created in order to provide a
- consistant view of the scenarios.
+ consistent view of the scenarios.
In this page, each scenario is evaluated according to test criteria.
The code for the automatic reporting is available at `[8]`_.
@@ -368,7 +368,7 @@ Please note that currently token authorization is implemented but is not yet ena
os-odl_l2-nofeature scenarios.
If no result is available or if all the results are failed, the test
case get 0 point.
- If it was succesfull at least once but not anymore during the 4 runs,
+ If it was successful at least once but not anymore during the 4 runs,
the case get 1 point (it worked once).
If at least 3 of the last 4 runs were successful, the case get 2 points.
If the last 4 runs of the test are successful, the test get 3 points.
@@ -400,7 +400,7 @@ Please note that currently token authorization is implemented but is not yet ena
Dashboard
=========
-Dashboard is used to provide a consistant view of the results collected
+Dashboard is used to provide a consistent view of the results collected
in CI.
The results showed on the dashboard are post processed from the Database,
which only contains raw results.
@@ -473,10 +473,10 @@ are identified but not covered yet by an existing testing project (e.g
security_scan before the creation of the security repository)
-How test constraints are defined?
+How are test constraints defined?
=================================
-Test constraints are defined according to 2 paramaters:
+Test constraints are defined according to 2 parameters:
* The scenario (DEPLOY_SCENARIO env variable)
* The installer (INSTALLER_TYPE env variable)
@@ -518,7 +518,7 @@ bgpvpn scenarios::
scenario: '(ocl)|(nosdn)|^(os-odl)((?!bgpvpn).)*$'
-How to write and check constaint regex?
+How to write and check constraint regex?
=======================================
Regex are standard regex. You can have a look at `[11]`_
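As a quick illustration, a constraint regex like the one quoted in the hunk above can be exercised locally with a small Python sketch; the DEPLOY_SCENARIO values used here are only examples and are not taken from the patch::

    import re

    # Constraint regex quoted from the hunk above.
    CONSTRAINT = r'(ocl)|(nosdn)|^(os-odl)((?!bgpvpn).)*$'

    # Illustrative scenario names.
    for scenario in ('os-odl_l2-nofeature-ha',
                     'os-odl_l2-bgpvpn-ha',
                     'os-nosdn-nofeature-noha'):
        enabled = re.search(CONSTRAINT, scenario) is not None
        print('{:28s} -> {}'.format(scenario, 'run' if enabled else 'skip'))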
@@ -534,7 +534,7 @@ How to know which test I can run?
You can use the API `[13]`_. The static declaration is in git `[5]`_
If you are in a Functest docker container (assuming that the
-environement has been prepared): just use the CLI.
+environment has been prepared): just use the CLI.
You can get the list per Test cases or by Tier::
@@ -692,7 +692,7 @@ e.g.::
This command will run all the test cases of the first 2 tiers, i.e.
healthcheck, connection_check, api_check, vping_ssh, vping_userdata,
-snaps_somke, tempest_smoke_serial and rally_sanity.
+snaps_smoke, tempest_smoke_serial and rally_sanity.
How to push your results into the Test Database
@@ -769,7 +769,7 @@ It can be described as follows::
Please note that each exclusion must be justified. the goal is not to exclude
test cases because they do not pass. Several scenarios reached the 100% criteria.
-So it is expected in the patch submited to exclude the cases to indicate the
+So it is expected in the patch submitted to exclude the cases to indicate the
reasons of the exclusion.
@@ -790,7 +790,7 @@ I have tests, to which category should I declare them?
CATEGORIES/TIERS description:
+----------------+-------------------------------------------------------------+
-| healthcheck | Simple OpenStack healtcheck tests case that validates the |
+| healthcheck | Simple OpenStack healthcheck tests case that validates the |
| | basic operations in OpenStack |
+----------------+-------------------------------------------------------------+
| Smoke | Set of smoke test cases/suites to validate the most common |
@@ -800,7 +800,7 @@ CATEGORIES/TIERS description:
| | Those come from Feature projects and need a bit of support |
| | for integration |
+----------------+-------------------------------------------------------------+
-| Components | Advanced Openstack tests: Full Tempest, Full Rally |
+| Components | Advanced OpenStack tests: Full Tempest, Full Rally |
+----------------+-------------------------------------------------------------+
| Performance | Out of Functest Scope |
+----------------+-------------------------------------------------------------+
@@ -816,7 +816,7 @@ We recommend to declare your test in the feature category.
VNF category is really dedicated to test including:
* creation of resources
- * deployement of an orchestrator/VNFM
+ * deployment of an orchestrator/VNFM
* deployment of the VNF
* test of the VNFM
* free resources
@@ -939,7 +939,7 @@ You can precise some configuration parameters in config_functest.yaml
Create your own VnfOnboarding file
-you must create your entry point through a python clase as referenced in the
+you must create your entry point through a python class as referenced in the
configuration file
e.g. aaa => creation of the file <Functest repo>/functest/opnfv_tests/vnf/aaa/aaa.py
diff --git a/docs/testing/user/configguide/configguide.rst b/docs/testing/user/configguide/configguide.rst
index 61fc45933..083bbf3a8 100644
--- a/docs/testing/user/configguide/configguide.rst
+++ b/docs/testing/user/configguide/configguide.rst
@@ -283,7 +283,7 @@ to attach the 'Up' status Functest container and start bash mode::
docker exec -it <Functest_Container_Name> bash
-4, Functest environemnt preparation and check
+4, Functest environment preparation and check
To see the Section below `Preparing the Functest environment`_.
@@ -417,7 +417,7 @@ We may distinguish several directories, the first level has 4 directories:
profile or any other test inputs that could be reused by any test
project.
* **docker**: This directory includes the needed files and tools to
- build the Funtest Docker image.
+ build the Functest Docker image.
* **docs**: This directory includes documentation: Release Notes,
User Guide, Configuration Guide and Developer Guide.
* **functest**: This directory contains all the code needed to run
@@ -513,7 +513,7 @@ This script will make sure that the requirements to run the tests are
met and will install the needed libraries and tools by all Functest
test cases. It should be run only once every time the Functest docker
container is started from scratch. If you try to run this command, on
-an already prepared enviroment, you will be prompted whether you really
+an already prepared environment, you will be prompted whether you really
want to continue or not::
functest env prepare
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index c1faecdaa..5268559bf 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -198,7 +198,7 @@ updates the appropriate parameters into the configuration file.
When the Tempest suite is executed, each test duration is measured and the full
console output is stored to a *log* file for further analysis.
-The Tempest testcases are distributed accross two
+The Tempest testcases are distributed across two
Tiers:
* Smoke Tier - Test Case 'tempest_smoke_serial'
@@ -239,7 +239,7 @@ The OPNFV Rally scenarios are based on the collection of the actual Rally scenar
A basic SLA (stop test on errors) has been implemented.
-The Rally testcases are distributed accross two Tiers:
+The Rally testcases are distributed across two Tiers:
* Smoke Tier - Test Case 'rally_sanity'
* Components Tier - Test case 'rally_full'
@@ -416,11 +416,11 @@ The list of tests can be described as follows:
* Delete operations
* Delete the port previously created via OpenStack
- * Check that the port has been also succesfully deleted in OpenDaylight
+ * Check that the port has been also successfully deleted in OpenDaylight
* Delete previously subnet created via OpenStack
* Check that the subnet has also been successfully deleted in OpenDaylight
* Delete the network created via OpenStack
- * Check that the network has also been succesfully deleted in OpenDaylight
+ * Check that the network has also been successfully deleted in OpenDaylight
Note: the checks in OpenDaylight are based on the returned HTTP status
code returned by OpenDaylight.
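A minimal sketch of such a status-code check, assuming a placeholder OpenDaylight northbound endpoint and credentials (the host, port, path and network id below are illustrative, not taken from the test code)::

    import requests

    # Illustrative values only: adjust to the OpenDaylight deployment under test.
    ODL_NEUTRON_URL = 'http://192.0.2.10:8181/controller/nb/v2/neutron/networks'
    NETWORK_ID = '11111111-2222-3333-4444-555555555555'

    resp = requests.get('%s/%s' % (ODL_NEUTRON_URL, NETWORK_ID),
                        auth=('admin', 'admin'))
    # After the network has been deleted via OpenStack, a 404 from OpenDaylight
    # indicates the deletion was propagated; a 200 would mean it is still there.
    print(resp.status_code)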