Diffstat (limited to 'docs/testing')
-rwxr-xr-x  docs/testing/developer/devguide/devguide.rst                      | 228
-rwxr-xr-x  docs/testing/developer/devguide/devguide_nsb_prox.rst             | 143
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Grafana_1.png         | bin 0 -> 98429 bytes
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Grafana_2.png         | bin 0 -> 61661 bytes
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Grafana_3.png         | bin 0 -> 24975 bytes
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Grafana_4.png         | bin 0 -> 67328 bytes
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Test_BM_Script.png    | bin 76705 -> 86549 bytes
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Test_HEAT_Script.png  | bin 90040 -> 0 bytes
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png | bin 0 -> 87627 bytes
-rw-r--r--  docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png | bin 0 -> 85248 bytes
-rw-r--r--  docs/testing/developer/devguide/index.rst                         | 1
-rw-r--r--  docs/testing/user/userguide/14-nsb-operation.rst                  | 114
12 files changed, 382 insertions, 104 deletions
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst
index 04d5350be..dbe92b846 100755
--- a/docs/testing/developer/devguide/devguide.rst
+++ b/docs/testing/developer/devguide/devguide.rst
@@ -1,16 +1,42 @@
+..
+   Licensed under the Apache License, Version 2.0 (the "License"); you may
+   not use this file except in compliance with the License. You may obtain
+   a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+   License for the specific language governing permissions and limitations
+   under the License.
+
+   Convention for heading levels in Yardstick documentation:
+
+   =======  Heading 0 (reserved for the title in a document)
+   -------  Heading 1
+   ~~~~~~~  Heading 2
+   +++++++  Heading 3
+   '''''''  Heading 4
+
+   Avoid deeper levels because they do not render well.
+
 Introduction
-=============
+------------

-Yardstick is a project dealing with performance testing. Yardstick produces its own test cases but can also be considered as a framework to support feature project testing.
+Yardstick is a project dealing with performance testing. Yardstick produces
+its own test cases but can also be considered as a framework to support feature
+project testing.

-Yardstick developed a test API that can be used by any OPNFV project. Therefore there are many ways to contribute to Yardstick.
+Yardstick developed a test API that can be used by any OPNFV project. Therefore
+there are many ways to contribute to Yardstick.

 You can:

 * Develop new test cases
 * Review codes
 * Develop Yardstick API / framework
-* Develop Yardstick grafana dashboards and Yardstick reporting page 
+* Develop Yardstick grafana dashboards and Yardstick reporting page
 * Write Yardstick documentation

 This developer guide describes how to interact with the Yardstick project.
@@ -19,28 +45,30 @@ part is a list of “How to” to help you to join the Yardstick family whatever
 your field of interest is.

 Where can I find some help to start?
--------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 .. _`user guide`: http://artifacts.opnfv.org/yardstick/danube/1.0/docs/stesting_user_userguide/index.html
 .. _`wiki page`: https://wiki.opnfv.org/display/yardstick/

 This guide is made for you. You can have a look at the `user guide`_. There
 are also references on documentation, video tutorials, tips in the
-project `wiki page`_. You can also directly contact us by mail with [Yardstick] prefix in the title at opnfv-tech-discuss@lists.opnfv.org or on the IRC chan #opnfv-yardstick.
+project `wiki page`_. You can also directly contact us by mail with [Yardstick]
+prefix in the subject at opnfv-tech-discuss@lists.opnfv.org or on the IRC
+channel #opnfv-yardstick.

 Yardstick developer areas
-==========================
+-------------------------

 Yardstick framework
---------------------
+~~~~~~~~~~~~~~~~~~~

-Yardstick can be considered as a framework. Yardstick is release as a docker
+Yardstick can be considered as a framework. Yardstick is released as a docker
 file, including tools, scripts and a CLI to prepare the environement and run
-tests. It simplifies the integration of external test suites in CI pipeline
-and provide commodity tools to collect and display results.
+tests. It simplifies the integration of external test suites in CI pipelines
+and provides commodity tools to collect and display results.

-Since Danube, test categories also known as tiers have been created to group
+Since Danube, test categories (also known as tiers) have been created to group
 similar tests, provide consistant sub-lists and at the end optimize test
 duration for CI (see How To section).

@@ -56,44 +84,54 @@ The tiers are:

 How Todos?
-===========
+----------

 How Yardstick works?
----------------------
+~~~~~~~~~~~~~~~~~~~~

 The installation and configuration of the Yardstick is described in the `user guide`_.

 How to work with test cases?
------------------------------
-
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-**Sample Test cases**
+Sample Test cases
++++++++++++++++++

-Yardstick provides many sample test cases which are located at "samples" directory of repo.
+Yardstick provides many sample test cases which are located in the ``samples``
+directory of the repo.

-Sample test cases are designed as following goals:
+Sample test cases are designed with the following goals:

-1. Helping user better understand yardstick features(including new feature and new test capacity).
+1. Helping users better understand Yardstick features (including new features
+   and new test capacity).

-2. Helping developer to debug his new feature and test case before it is offical released.
+2. Helping developers debug a new feature and test case before it is
+   officially released.

-3. Helping other developers understand and verify the new patch before the patch merged.
+3. Helping other developers understand and verify the new patch before the
+   patch is merged.

-So developers should upload your sample test case as well when they are trying to upload a new patch which is about the yardstick new test case or new feature.
+Developers should also upload their sample test cases when they are uploading
+a new patch which adds a new Yardstick test case or feature.

-**OPNFV Release Test cases**
+OPNFV Release Test cases
+++++++++++++++++++++++++

-OPNFV Release test cases which are located at "tests/opnfv/test_cases" of repo.
-those test cases are runing by OPNFV CI jobs, It means those test cases should be more mature than sample test cases.
-OPNFV scenario owners can select related test cases and add them into the test suites which is represent the scenario.
+OPNFV Release test cases are located at ``yardstick/tests/opnfv/test_cases``.
+These test cases are run by OPNFV CI jobs, which means these test cases should
+be more mature than sample test cases.
+OPNFV scenario owners can select related test cases and add them into the test
+suites which represent their scenario.

-**Test case Description File**
+Test case Description File
+++++++++++++++++++++++++++

 This section will introduce the meaning of the Test case description file.
-we will use ping.yaml as a example to show you how to understand the test case description file.
-In this Yaml file, you can easily find it consists of two sections. One is “Scenarios”, the other is “Context”.::
+We will use ``ping.yaml`` as an example to show how to understand the test
+case description file.
+This ``yaml`` file consists of two sections. One is ``scenarios``, the other
+is ``context``::

 ---
 # Sample benchmark task config file
@@ -150,18 +188,32 @@ In this Yaml file, you can easily find it consists of two sections. One is “Sc

 {% endif %}
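For orientation before the walkthrough below, a minimal task file in the
spirit of ``ping.yaml`` could look like the following sketch (all values are
illustrative placeholders, not copied from the repo)::

    ---
    # minimal illustrative task: one scenario plus one context
    schema: "yardstick:task:0.1"

    scenarios:
    -
      type: Ping
      options:
        packetsize: 100           # assumed option name; check the samples
      host: athena.demo           # client VM, defined in the context below
      target: ares.demo           # server VM, defined in the context below
      runner:
        type: Iteration
        iterations: 10
        interval: 1

    context:
      name: demo
      image: cirros-0.3.5
      flavor: yardstick-flavor
      user: cirros
      servers:
        athena:
        ares:
      networks:
        test:
          cidr: '10.0.1.0/24'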

-"Contexts" section is the description of pre-condition of testing. As ping.yaml shown, you can configure the image, flavor , name ,affinity and network of Test VM(servers), with this section, you will get a pre-condition env for Testing.
-Yardstick will automatic setup the stack which are described in this section.
-In fact, yardstick use convert this section to heat template and setup the VMs by heat-client (Meanwhile, yardstick can support to convert this section to Kubernetes template to setup containers).
-
-Two Test VMs(athena and ares) are configured by keyword "servers".
-"flavor" will determine how many vCPU, how much memory for test VMs.
-As "yardstick-flavor" is a basic flavor which will be automatically created when you run command "yardstick env prepare". "yardstick-flavor" is "1 vCPU 1G RAM,3G Disk".
-"image" is the image name of test VMs. if you use cirros.3.5.0, you need fill the username of this image into "user". the "policy" of placement of Test VMs have two values (affinity and availability).
-"availability" means anti-affinity. In "network" section, you can configure which provide network and physical_network you want Test VMs use.
-you may need to configure segmentation_id when your network is vlan.
-
-Moreover, you can configure your specific flavor as below, yardstick will setup the stack for you. ::
+The ``contexts`` section is the description of the pre-conditions of testing.
+As ``ping.yaml`` shows, you can configure the image, flavor, name, affinity
+and network of the Test VMs (servers); with this section, you get a
+pre-configured environment for testing.
+Yardstick will automatically set up the stack described in this section.
+Yardstick converts this section to a Heat template and sets up the VMs with
+the Heat client (Yardstick can also convert this section to a Kubernetes
+template to set up containers).
+
+In the examples above, two Test VMs (athena and ares) are configured by the
+keyword ``servers``.
+``flavor`` determines how many vCPUs and how much memory the test VMs get.
+``yardstick-flavor`` is a basic flavor which will be automatically created
+when you run the command ``yardstick env prepare``; it provides
+``1 vCPU, 1G RAM, 3G disk``.
+``image`` is the image name of the test VMs. If you use ``cirros.3.5.0``, you
+need to fill the username of this image into ``user``.
+The placement ``policy`` of the Test VMs has two values (``affinity`` and
+``availability``), where ``availability`` means anti-affinity.
+In the ``network`` section, you can configure which ``provider`` network and
+``physical_network`` you want the Test VMs to use.
+You may need to configure ``segmentation_id`` when your network is a VLAN.
+
+Moreover, you can configure your specific flavor as below, and Yardstick will
+set up the stack for you::

   flavor:
     name: yardstick-new-flavor
@@ -170,7 +222,8 @@ Moreover, you can configure your specific flavor as below, yardstick will setup
     disk: 2

-Besides default heat stack, yardstick also allow you to setup other two types stack. they are "Node" and "Kubernetes". ::
+Besides the default ``Heat`` context, Yardstick also allows you to set up two
+other types of context. They are ``Node`` and ``Kubernetes``::

   context:
     type: Kubernetes
@@ -183,48 +236,64 @@ and ::

     name: LF
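As a sketch of the ``Node`` alternative (host attributes are assumed for
illustration; adjust them to your own pod), the context simply points at a
pod file that lists pre-installed machines::

    context:
      type: Node
      name: LF
      file: /etc/yardstick/nodes/pod.yaml

    # illustrative entry inside the referenced pod.yaml
    nodes:
    -
      name: node1
      role: Controller
      ip: 10.1.0.50
      user: root
      key_filename: /root/.ssh/id_rsa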

+The ``scenarios`` section is the description of the testing steps; you can
+orchestrate complex testing steps through scenarios.

-"Scenarios" section is the description of testing step, you can orchestrate the complex testing step through orchestrate scenarios.
+Each scenario will do one testing step.
+In one scenario, you can configure the type of scenario (operation), the
+``runner`` type and the ``sla`` of the scenario.

-Each scenario will do one testing step, In one scenario, you can configure the type of scenario(operation), runner type and SLA of the scenario.
+For TC002, we only have one step, which is Ping from the host VM to the
+target VM. In this step, we also have some detailed operations implemented
+(such as ssh to the VM, ping from VM1 to VM2, get the latency, verify the
+SLA, report the result).

-For TC002, We only have one step , that is Ping from host VM to target VM. In this step, we also have some detail operation implement ( such as ssh to VM, ping from VM1 to VM2. Get the latency, verify the SLA, report the result).
+If you want to see the implementation details, you can check the scenario's
+Python file. For the Ping scenario, you can find it in the Yardstick repo
+(``yardstick/yardstick/benchmark/scenarios/networking/ping.py``).

-If you want to get this detail implement , you can check with the scenario.py file. For Ping scenario, you can find it in yardstick repo ( yardstick / yardstick / benchmark / scenarios / networking / ping.py)
+After you select the type of scenario (such as Ping), you will select one
+type of ``runner``; there are 4 types of runner. ``Iteration`` and
+``Duration`` are the most commonly used, and the default is ``Iteration``.

-after you select the type of scenario( such as Ping), you will select one type of runner, there are 4 types of runner. Usually, we use the "Iteration" and "Duration". and Default is "Iteration".

-For Iteration, you can specify the iteration number and interval of iteration. ::
+For ``Iteration``, you can specify the number of iterations and the interval
+between iterations::

   runner:
     type: Iteration
     iterations: 10
     interval: 1

-That means yardstick will iterate the 10 times of Ping test and the interval of each iteration is one second.
+That means Yardstick will repeat the Ping test 10 times and the interval of
+each iteration is one second.

-For Duration, you can specify the duration of this scenario and the interval of each ping test. ::
+For ``Duration``, you can specify the duration of this scenario and the
+interval of each ping test::

   runner:
     type: Duration
     duration: 60
     interval: 10

-That means yardstick will run the ping test as loop until the total time of this scenario reach the 60s and the interval of each loop is ten seconds.
-
+That means Yardstick will run the ping test in a loop until the total time of
+this scenario reaches 60s and the interval of each loop is ten seconds.

-SLA is the criterion of this scenario. that depends on the scenario. different scenario can have different SLA metric.
+SLA is the criterion of this scenario. This depends on the scenario. Different
+scenarios can have different SLA metrics.
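To illustrate the ``sla`` keyword before moving on (key names assumed from
the Ping scenario; verify against the sample files), a scenario can bound the
acceptable latency and choose what happens on a violation::

    sla:
      max_rtt: 10        # ms; an iteration fails if round-trip time exceeds this
      action: monitor    # 'monitor' records violations; 'assert' aborts the task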

-**How to write a new test case**
+How to write a new test case
+++++++++++++++++++++++++++++

-Yardstick already provide a library of testing step. that means yardstick provide lots of type scenario.
+Yardstick already provides a library of testing steps (i.e. different types
+of scenario).

-Basiclly, What you need to do is to orchestrate the scenario from the library.
+Basically, what you need to do is to orchestrate the scenario from the
+library.

-Here, We will show two cases. One is how to write a simple test case, the other is how to write a quite complex test case.
+Here, we will show two cases. One is how to write a simple test case, the
+other is how to write a quite complex test case.

+Write a new simple test case
+''''''''''''''''''''''''''''

 First, you can image a basic test case description as below.
@@ -314,7 +383,7 @@ First, you can image a basic test case description as below.

 TODO

 How can I contribute to Yardstick?
------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 If you are already a contributor of any OPNFV project, you can contribute to
 Yardstick. If you are totally new to OPNFV, you must first create your Linux
@@ -329,7 +398,7 @@ We distinguish 2 levels of contributors:

 Yardstick commitors are promoted by the Yardstick contributors.

 Gerrit & JIRA introduction
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++

 .. _Gerrit: https://www.gerritcodereview.com/
 .. _`OPNFV Gerrit`: http://gerrit.opnfv.org/
@@ -338,7 +407,8 @@ Gerrit & JIRA introduction

 OPNFV uses Gerrit_ for web based code review and repository management for the
 Git Version Control System. You can access `OPNFV Gerrit`_. Please note that
-you need to have Linux Foundation ID in order to use OPNFV Gerrit. You can get one from this link_.
+you need to have a Linux Foundation ID in order to use OPNFV Gerrit. You can
+get one from this link_.

 OPNFV uses JIRA_ for issue management. An important principle of change
 management is to have two-way trace-ability between issue management
@@ -350,14 +420,16 @@ If you want to contribute to Yardstick, you can pick a issue from Yardstick's
 JIRA dashboard or you can create you own issue and submit it to JIRA.

 Install Git and Git-reviews
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++

 Installing and configuring Git and Git-Review is necessary in order to submit
-code to Gerrit. The `Getting to the code <https://wiki.opnfv.org/display/DEV/Developer+Getting+Started>`_ page will provide you with some help for that.
+code to Gerrit. The
+`Getting to the code <https://wiki.opnfv.org/display/DEV/Developer+Getting+Started>`_
+page will provide you with some help for that.

 Verify your patch locally before submitting
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++++++++++++++++

 Once you finish a patch, you can submit it to Gerrit for code review. A
 developer sends a new patch to Gerrit will trigger patch verify job on Jenkins
@@ -366,7 +438,8 @@ code coverage test. Before you submit your patch, it is recommended to run the
 patch verification in your local environment first.

 Open a terminal window and set the project's directory to the working
-directory using the ``cd`` command. Assume that ``YARDSTICK_REPO_DIR`` is the path to the Yardstick project folder on your computer::
+directory using the ``cd`` command. Assume that ``YARDSTICK_REPO_DIR`` is the
+path to the Yardstick project folder on your computer::

 cd $YARDSTICK_REPO_DIR

@@ -377,7 +450,7 @@ Verify your patch::

 It is used in CI but also by the CLI.

 Submit the code with Git
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++

 Tell Git which files you would like to take into account for the next commit.
 This is called 'staging' the files, by placing them into the staging area,
@@ -417,7 +490,7 @@ to the commits, and eventually navigate among the latter more easily.

 `This document`_ happened to be very clear and useful to get started with that.
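Putting the staging, commit and review steps together, a typical local
sequence is sketched below (it assumes ``git review -s`` has already
installed the Gerrit remote and commit-msg hook; the file path is only an
example)::

    cd $YARDSTICK_REPO_DIR
    git review -s                    # one-time Gerrit setup
    git add yardstick/benchmark/scenarios/networking/ping.py
    git commit --signoff             # OPNFV requires a Signed-off-by line
    git review                       # push the change to Gerrit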

 Push the code to Gerrit for review
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++

 Now that the code has been comitted into your local Git repository the
 following step is to push it online to Gerrit for it to be reviewed. The
@@ -432,27 +505,27 @@ Yardstick committers and contributors to review your codes.
    :width: 800px
    :alt: Gerrit for code review

-You can find a list Yardstick people `here <https://wiki.opnfv.org/display/yardstick/People>`_,
-or use the ``yardstick-reviewers`` and ``yardstick-committers`` groups in gerrit.
+You can find a list of Yardstick people
+`here <https://wiki.opnfv.org/display/yardstick/People>`_, or use the
+``yardstick-reviewers`` and ``yardstick-committers`` groups in Gerrit.

 Modify the code under review in Gerrit
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++

 At the same time the code is being reviewed in Gerrit, you may need to edit it
 to make some changes and then send it back for review. The following steps go
 through the procedure.

 Once you have modified/edited your code files under your IDE, you will have to
-stage them. The 'status' command is very helpful at this point as it provides
-an overview of Git's current state::
+stage them. The ``git status`` command is very helpful at this point as it
+provides an overview of Git's current state::

 git status

-The output of the command provides us with the files that have been modified
-after the latest commit.
+This command lists the files that have been modified since the last commit.

 You can now stage the files that have been modified as part of the Gerrit code
-review edition/modification/improvement using ``git add`` command. It is now
+review addition/modification/improvement with ``git add``. It is now
 time to commit the newly modified files, but the objective here is not to
 create a new commit, we simply want to inject the new changes into the
 previous commit. You can achieve that with the '--amend' option on the
@@ -469,7 +542,8 @@ The final step consists in pushing the newly modified commit to Gerrit::

 Plugins
-==========
+-------

-For information about Yardstick plugins, refer to the chapter **Installing a plug-in into Yardstick** in the `user guide`_.
+For information about Yardstick plugins, refer to the chapter
+**Installing a plug-in into Yardstick** in the `user guide`_.

diff --git a/docs/testing/developer/devguide/devguide_nsb_prox.rst b/docs/testing/developer/devguide/devguide_nsb_prox.rst
index 22628413b..79990055a 100755
--- a/docs/testing/developer/devguide/devguide_nsb_prox.rst
+++ b/docs/testing/developer/devguide/devguide_nsb_prox.rst
@@ -244,10 +244,13 @@ Now let's examine the components of the file in detail
 3. ``nodes`` - This names the Traffic Generator and the System
    under Test. Does not need to change.

-4. ``prox_path`` - Location of the Prox executable on the traffic
+4. ``interface_speed_gbps`` - This is an optional parameter. If not present,
+   the system defaults to 10Gbps. This defines the speed of the interfaces.
+
+5. ``prox_path`` - Location of the Prox executable on the traffic
    generator (Either baremetal or Openstack Virtual Machine)

-5. ``prox_config`` - This is the ``SUT Config File``.
+6. ``prox_config`` - This is the ``SUT Config File``.
    In this case it is ``handle_l2fwd-2.cfg``

    A number of additional parameters can be added. This example
@@ -285,16 +288,31 @@ Now let's examine the components of the file in detail
    of a file called ``parameters.lua``, which contains information
    retrieved from either the hardware or the openstack configuration.

-6. ``prox_args`` - this specifies the command line arguments to start
+7. ``prox_args`` - this specifies the command line arguments to start
    prox. See `prox command line`_.

-7. ``prox_config`` - This specifies the Traffic Generator config file.
+8. ``prox_config`` - This specifies the Traffic Generator config file.
+
+9. ``runner`` - This is set to ``ProxDuration``. This specifies that the
+   test runs for a set duration. Other runner types are available,
+   but it is recommended to use ``ProxDuration``.
+
+   The following parameters are supported:
+
+   ``interval`` - (optional) - This specifies the sampling interval.
+   Default is 1 sec.
+
+   ``sampled`` - (optional) - This specifies if sampling information is
+   required. Default ``no``.
+
+   ``duration`` - This is the length of the test in seconds. Default
+   is 60 seconds.

-8. ``runner`` - This is set to ``Duration`` - This specified that the
-   test run for a set duration. Other runner types are available
-   but it is recommend to use ``Duration``

+   ``confirmation`` - This specifies the number of confirmation retests to
+   be made before deciding to increase or decrease line speed. Default 0.
+
+10. ``context`` - This is the ``context`` for a 2 port Baremetal configuration.

-9. ``context`` - This is ``context`` for a 2 port Baremetal configuration.
    If a 4 port configuration was required then file
    ``prox-baremetal-4.yaml`` would be used. This is the NSB Prox
    baremetal configuration file.
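Pulling items 4 to 9 together, the corresponding fragment of a baremetal test
description might look like this sketch (paths, config file names and values
are placeholders, not taken from the repo)::

    options:
      interface_speed_gbps: 10            # optional, defaults to 10 Gbps
      vnf__0:
        prox_path: /opt/nsb_bin/prox      # PROX executable on the SUT
        prox_config: "configs/handle_l2fwd-2.cfg"
      tg__0:
        prox_path: /opt/nsb_bin/prox      # PROX executable on the generator
        prox_config: "configs/gen_l2fwd-2.cfg"
        prox_args:
          "-e": ""                        # extra PROX command line arguments

    runner:
      type: ProxDuration
      interval: 1          # optional sampling interval in seconds
      sampled: yes         # optional, collect sampling information
      duration: 600        # length of the test in seconds
      confirmation: 1      # confirmation retests before changing line speed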

@@ -304,7 +322,8 @@ Now let's examine the components of the file in detail
 *Traffic Profile file*
 ----------------------

-This describes the details of the traffic flow. In this case ``prox_binsearch.yaml`` is used.
+This describes the details of the traffic flow. In this case
+``prox_binsearch.yaml`` is used.

 .. image:: images/PROX_Traffic_profile.png
    :width: 800px
@@ -326,21 +345,29 @@ This describes the details of the traffic flow. In this case ``prox_binsearch.ya
       Custom traffic types can be created by creating a new traffic profile class.

-3. ``tolerated_loss`` - This specifies the percentage of packets that can be lost/dropped before
-   we declare success or failure. Success is Transmitted-Packets from Traffic Generator is greater than or equal to
+3. ``tolerated_loss`` - This specifies the percentage of packets that
+   can be lost/dropped before
+   we declare success or failure. Success is when the Transmitted-Packets
+   from the Traffic Generator are greater than or equal to the
    packets received by Traffic Generator plus tolerated loss.

-4. ``test_precision`` - This specifies the precision of the test results. For some tests the success criteria
-   may never be achieved because the test precision may be greater than the successful throughput. For finer
-   results increase the precision by making this value smaller.
+4. ``test_precision`` - This specifies the precision of the test
+   results. For some tests the success criteria may never be
+   achieved because the test precision may be greater than the
+   successful throughput. For finer results increase the precision
+   by making this value smaller.

-5. ``packet_sizes`` - This specifies the range of packets size this test is run for.
+5. ``packet_sizes`` - This specifies the range of packet sizes this
+   test is run for.

-6. ``duration`` - This specifies the sample duration that the test uses to check for success or failure.
+6. ``duration`` - This specifies the sample duration that the test
+   uses to check for success or failure.

-7. ``lower_bound`` - This specifies the test initial lower bound sample rate. On success this value is increased.
+7. ``lower_bound`` - This specifies the test initial lower bound sample rate.
+   On success this value is increased.

-8. ``upper_bound`` - This specifies the test initial upper bound sample rate. On success this value is decreased.
+8. ``upper_bound`` - This specifies the test initial upper bound sample rate.
+   On success this value is decreased.
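Assembled from the fields above, a binary-search traffic profile skeleton
could look as follows (the schema string and class name are assumptions; see
``prox_binsearch.yaml`` in the repo for the authoritative values)::

    schema: "nsb:traffic_profile:0.1"
    name: prox_binsearch
    traffic_profile:
      traffic_type: ProxBinSearchProfile   # selects the binary-search logic
      tolerated_loss: 0.001                # percent of packets that may be lost
      test_precision: 0.1                  # stop when the bounds are this close
      packet_sizes: [64]                   # packet sizes the test is run for
      duration: 10                         # seconds per sample
      lower_bound: 0.0                     # initial lower bound, % of line rate
      upper_bound: 100.0                   # initial upper bound, % of line rate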

 Other traffic profiles exist eg prox_ACL.yaml which does not
 compare what is received with what is transmitted. It just
@@ -371,14 +398,18 @@ See this prox_vpe.yaml as example::

 We will use ``tc_prox_heat_context_l2fwd-2.yaml`` as a example to show you
 how to understand the test description file.

-.. image:: images/PROX_Test_HEAT_Script.png
+.. image:: images/PROX_Test_HEAT_Script1.png
    :width: 800px
-   :alt: NSB PROX Test Description File
+   :alt: NSB PROX Test Description File - Part 1
+
+.. image:: images/PROX_Test_HEAT_Script2.png
+   :width: 800px
+   :alt: NSB PROX Test Description File - Part 2

 Now lets examine the components of the file in detail

-Sections 1 to 8 are exactly the same in Baremetal and in Heat. Section
-``9`` is replaced with sections A to F. Section 9 was for a baremetal
+Sections 1 to 9 are exactly the same in Baremetal and in Heat. Section
+``10`` is replaced with sections A to F. Section 10 was for a baremetal
 configuration file. This has no place in a heat configuration.

 A. ``image`` - yardstick-samplevnfs. This is the name of the image
@@ -418,12 +449,12 @@ F. ``networks`` - is composed of a management network labeled ``mgmt``
         gateway_ip: 'null'
         port_security_enabled: False
         enable_dhcp: 'false'
-      downlink_1:
+      uplink_1:
         cidr: '10.0.4.0/24'
         gateway_ip: 'null'
         port_security_enabled: False
         enable_dhcp: 'false'
-      downlink_2:
+      downlink_1:
         cidr: '10.0.5.0/24'
         gateway_ip: 'null'
         port_security_enabled: False
         enable_dhcp: 'false'
@@ -1033,7 +1064,7 @@ If PROX NSB does not work on baremetal, problem is either in network configurati

    1. What is received on 0 is transmitted on 1, received on 1 transmitted on 0,
       received on 2 transmitted on 3 and received on 3 transmitted on 2.
    2. No packets are Failed.
-   3. No Packets are discarded.
+   3. No packets are discarded.

   We can also dump the packets being received or transmitted via the following commands. ::
@@ -1228,7 +1259,69 @@ Where

     4) ir.intel.com = local no proxy

+*How to Understand the Grafana output?*
+---------------------------------------
+
+   .. image:: images/PROX_Grafana_1.png
+      :width: 1000px
+      :alt: NSB PROX Grafana_1
+
+   .. image:: images/PROX_Grafana_2.png
+      :width: 1000px
+      :alt: NSB PROX Grafana_2
+
+   .. image:: images/PROX_Grafana_3.png
+      :width: 1000px
+      :alt: NSB PROX Grafana_3
+
+   .. image:: images/PROX_Grafana_4.png
+      :width: 1000px
+      :alt: NSB PROX Grafana_4
+
+A. Test Parameters - Test interval, Duration, Tolerated Loss and Test Precision
+
+B. Overall number of packets sent and received during the test
+
+C. Generator stats - packets sent, received and attempted by the generator
+
+D. Packet size
+
+E. Number of packets received by the SUT
+
+F. Number of packets forwarded by the SUT
+
+G. This is the number of packets sent by the generator per port, for each interval.
+
+H. This is the number of packets received by the generator per port, for each interval.
+
+I. This is the number of packets sent and received by the generator and lost by the SUT
+   that meet the success criteria.
+
+J. This shows the changes in the percentage of line rate used over a test. The MAX and the
+   MIN should converge to within the interval specified as the ``test-precision``.
+
+K. This is the packet size supported during the test. If "N/A" appears in any field, the
+   result has not been decided.
+
+L. This is the calculated throughput in MPPS (million packets per second) for this line rate.
+
+M. This is the actual number of packets sent by the generator in MPPS.
+
+N. This is the actual number of packets received by the generator in MPPS.
+
+O. This is the total number of packets sent by the SUT.
+
+P. This is the total number of packets received by the SUT.
+
+Q. This is the total number of packets dropped. (These packets were sent by the generator
+   but not received back by the generator; they may have been dropped by the SUT or the
+   generator.)
+
+R. This is the tolerated number of packets that can be dropped.
+
+S. This is the test throughput in Gbps.
+
+T. This is the latency per port.
+
+U. This is the CPU utilization.

diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_1.png b/docs/testing/developer/devguide/images/PROX_Grafana_1.png
new file mode 100644
index 000000000..d272edcf3
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_1.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_2.png b/docs/testing/developer/devguide/images/PROX_Grafana_2.png
new file mode 100644
index 000000000..4f7fd4cf5
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_2.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_3.png b/docs/testing/developer/devguide/images/PROX_Grafana_3.png
new file mode 100644
index 000000000..5ae967698
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_3.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_4.png b/docs/testing/developer/devguide/images/PROX_Grafana_4.png
new file mode 100644
index 000000000..5353d1c7e
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_4.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png b/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png
index 32530eb15..c09f7bb1b 100644
--- a/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png
+++ b/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script.png b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script.png
deleted file mode 100644
index 754973b4e..000000000
--- a/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script.png
+++ /dev/null
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png
new file mode 100644
index 000000000..bd375dba1
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png
new file mode 100644
index 000000000..99d9d24e6
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png
Binary files differ
diff --git a/docs/testing/developer/devguide/index.rst b/docs/testing/developer/devguide/index.rst
index 92a18f6ee..9a76a32f1 100644
--- a/docs/testing/developer/devguide/index.rst
+++ b/docs/testing/developer/devguide/index.rst
@@ -14,3 +14,4 @@ Yardstick Developer Guide
    :numbered:

    devguide
+   devguide_nsb_prox
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index d157914a9..a5f3a0cf6 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -84,6 +84,116 @@ In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``
       downlink_0:
         - xe0

+
+Availability zone
+^^^^^^^^^^^^^^^^^
+
+The configuration of the availability zone is required in cases where the
+location of an exact compute host/group of compute hosts needs to be
+specified for the SampleVNF or traffic generator in the heat test case. If
+this is the case, please follow the instructions below.
+
+.. _`Create a host aggregate`:
+
+1. Create a host aggregate in OpenStack and add the available compute hosts
+   into the aggregate group.
+
+   .. note:: Change the ``<AZ_NAME>`` (availability zone name), ``<AGG_NAME>``
+      (host aggregate name) and ``<HOST>`` (host name of one of the compute
+      nodes) in the commands below.
+
+   .. code-block:: bash
+
+      # create host aggregate
+      openstack aggregate create --zone <AZ_NAME> --property availability_zone=<AZ_NAME> <AGG_NAME>
+      # show available hosts
+      openstack compute service list --service nova-compute
+      # add selected host into the host aggregate
+      openstack aggregate add host <AGG_NAME> <HOST>
+
+2. To specify the OpenStack location (the exact compute host or group of
+   hosts) of the SampleVNF or traffic generator in the heat test case, the
+   ``availability_zone`` server configuration option should be used. For
+   example:
+
+   .. note:: The ``<AZ_NAME>`` (availability zone name) should be changed
+      according to the name used during the host aggregate creation steps
+      above.
+
+   .. code-block:: yaml
+
+     context:
+       name: yardstick
+       image: yardstick-samplevnfs
+       ...
+       servers:
+         vnf__0:
+           ...
+           availability_zone: <AZ_NAME>
+           ...
+         tg__0:
+           ...
+           availability_zone: <AZ_NAME>
+           ...
+       networks:
+         ...
+
+There are two examples of the SampleVNF scale-out test case which use the
+availability zone feature to specify the exact location of scaled VNFs and
+traffic generators.
+
+Those are:
+
+.. code-block:: console
+
+   <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml
+   <repo>/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml
+
+.. note:: This section describes the PROX scale-out testcase, but the same
+   procedure is used for the vFW test case.
+
+1. Before running the scale-out test case, make sure the host aggregates are
+   configured in the OpenStack environment. To check this, run the following
+   command:
+
+   .. code-block:: console
+
+      # show configured host aggregates (example)
+      openstack aggregate list
+      +----+------+-------------------+
+      | ID | Name | Availability Zone |
+      +----+------+-------------------+
+      | 4  | agg0 | AZ_NAME_0         |
+      | 5  | agg1 | AZ_NAME_1         |
+      +----+------+-------------------+
+
+2. If no host aggregates are configured, please use `steps above`__ to
+   configure them.
+
+__ `Create a host aggregate`_
+
+
+3. Run the SampleVNF PROX scale-out test case, specifying the availability
+   zone of each VNF and traffic generator as task arguments.
+
+   .. note:: The ``az_0`` and ``az_1`` should be changed according to the
+      host aggregates created in OpenStack.
+
+   .. code-block:: console
+
+      yardstick -d task start\
+      <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml\
+      --task-args='{
+        "num_vnfs": 4, "availability_zone": {
+          "vnf_0": "az_0", "tg_0": "az_1",
+          "vnf_1": "az_0", "tg_1": "az_1",
+          "vnf_2": "az_0", "tg_2": "az_1",
+          "vnf_3": "az_0", "tg_3": "az_1"
+        }
+      }'
+
+   ``num_vnfs`` specifies how many VNFs are going to be deployed in the
+   ``heat`` contexts. ``vnf_X`` and ``tg_X`` arguments configure the
+   availability zone where the VNF and traffic generator are going to be
+   deployed.
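Once the stack is up, the resulting placement can be spot-checked with the
OpenStack client (a sketch; the availability zone column is part of the
standard ``--long`` output, while per-host details usually require admin
rights)::

    # list servers together with their availability zone
    openstack server list --long -c Name -c "Availability Zone"

    # inspect a single VNF or traffic generator instance
    openstack server show <SERVER_NAME> -c OS-EXT-AZ:availability_zone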
+
+
 Collectd KPIs
 -------------

@@ -154,7 +264,7 @@ We set the VCPUs and memory using the ``--task-args`` options

 .. code-block:: console

-  yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "ports": 2}' \
+  yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "vports": 2}' \
   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml

 In order to support ports scale-up, traffic and topology templates need to be used in testcase.

@@ -242,7 +352,7 @@ Baremetal file: /etc/yardstick/nodes/pod.yaml

 Scale-Out
---------------------
+---------

 VNFs performance data with scale-out helps