-rw-r--r--  README.rst                                                         7
-rw-r--r--  docs/development/design/design.rst                                17
-rw-r--r--  docs/development/design/traffic_desc.rst                           7
-rw-r--r--  docs/development/design/versioning.rst                             2
-rw-r--r--  docs/development/overview/overview.rst                             4
-rw-r--r--  docs/release/release-notes/release-notes.rst                      17
-rw-r--r--  docs/testing/user/userguide/advanced.rst                          39
-rw-r--r--  docs/testing/user/userguide/extchains.rst                         77
-rw-r--r--  docs/testing/user/userguide/images/nfvbench-ext-multi-vlans.png   bin 0 -> 120315 bytes
-rw-r--r--  docs/testing/user/userguide/images/nfvbench-ext-shared.png        bin 0 -> 100743 bytes
-rw-r--r--  docs/testing/user/userguide/index.rst                              4
-rw-r--r--  docs/testing/user/userguide/readme.rst                            13
12 files changed, 115 insertions, 72 deletions
diff --git a/README.rst b/README.rst
index 0ed92c8..a3f67cf 100644
--- a/README.rst
+++ b/README.rst
@@ -5,11 +5,12 @@ The NFVbench tool provides an automated way to measure the network performance f
on any NFVi system viewed as a black box (NFVi Full Stack).
An NFVi full stack exposes the following interfaces:
- an OpenStack API for those NFVi platforms based on OpenStack
-- an interface to send and receive packets on the data plane (typically through top of rack switches)
+- an interface to send and receive packets on the data plane (typically through top of rack switches,
+  though simpler direct wiring to a looping device would also work)
The NFVi full stack does not have to be supported by the OPNFV ecosystem and can be any functional OpenStack system that provides
the above interfaces.
-NFVbench can also be used without OpenStack on any bare system that can handle L2 forwarding or L3 routing.
+NFVbench can also be used without OpenStack on any networking device that can handle L2 forwarding or L3 routing.
NFVbench can be installed standalone (in the form of a single Docker container) and is fully functional without
the need to install any other OPNFV framework.
@@ -26,6 +27,6 @@ http://docs.opnfv.org/en/latest/submodules/nfvbench/docs/testing/user/userguide/
Contact Information
-------------------
-Inquiries and questions: send an email to opnfv-tech-discuss@lists.opnfv.org with a Subject line starting with "[nfvbench]".
+Inquiries and questions: send an email to opnfv-tech-discuss@lists.opnfv.org with a Subject line starting with "#nfvbench".
Open issues or submit an issue or enhancement request: https://jira.opnfv.org/projects/NFVBENCH/issues (this requires an OPNFV Linux Foundation login).
diff --git a/docs/development/design/design.rst b/docs/development/design/design.rst
index 6de6007..75b90f8 100644
--- a/docs/development/design/design.rst
+++ b/docs/development/design/design.rst
@@ -15,9 +15,11 @@ Introduction
NFVbench can be decomposed in the following components:
- Configuration
-- Staging
-- Traffic generation
-- Traffic generator results analysis
+- Orchestration:
+
+ - Staging
+ - Traffic generation
+ - Results analysis
Configuration
-------------
@@ -34,7 +36,7 @@ User configuration can come from:
- custom platform plugin
The precedence order for configuration is (from highest precedence to lowest precedence)
-- CLI confguration or REST configuration
+- CLI configuration or REST configuration
- custom platform plugin
- default configuration
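+
+For illustration, assuming the option names used by the default configuration file (they may evolve
+across releases), the precedence can be sketched as follows:
+
+.. code-block:: yaml
+
+    # default configuration (lowest precedence) - excerpt
+    service_chain: PVP
+    service_chain_count: 1
+    # a custom platform plugin may override a default, e.g. set service_chain_count to 2
+    # a CLI or REST option (highest precedence) overrides both,
+    # e.g. "nfvbench -scc 4" results in an effective service_chain_count of 4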
@@ -43,6 +45,11 @@ with default platform options which can be either hardcoded or calculated at run
(such as platform deployment configuration files).
A custom platform plugin class is a child of the parent class nfvbench.config_plugin.ConfigPlugin.
+Orchestration
+-------------
+Once the configuration is settled, benchmark orchestration is managed by the ChainRunner class (nfvbench.chain_runner.ChainRunner).
+The chain runner will take care of orchestrating the staging, traffic generation and results analysis.
+
Staging
-------
@@ -57,7 +64,7 @@ Traffic Generation
The traffic generation component is in charge of controlling the TRex traffic generator using its python API.
It includes tasks such as:
- traffic check end to end to make sure the packet path is clear in both directions before starting a benchmark
-- programming the Trex traffic flows based on requested parameters
+- programming the TRex traffic flows based on requested parameters
- fixed rate control
- NDR/PDR binary search
diff --git a/docs/development/design/traffic_desc.rst b/docs/development/design/traffic_desc.rst
index 2a40b6a..6442013 100644
--- a/docs/development/design/traffic_desc.rst
+++ b/docs/development/design/traffic_desc.rst
@@ -10,11 +10,10 @@ The general packet path model followed by NFVbench requires injecting traffic in
number of service chains, where each service chain is identified by 2 edge networks (left and right).
In the current multi-chaining model:
-- all service chains share the same left and right edge networks
-- each port associated to the traffic generator is dedicated to send traffic to one edge network
+- all service chains can either share the same left and right edge networks or have their own dedicated edge networks
+- each port associated with the traffic generator is dedicated to sending traffic to one side of the edge networks
-In an OpenStack deployment, this corresponds to all chains sharing the same 2 neutron networks.
-If VLAN encapsulation is used, all traffic sent to a port will have the same VLAN id.
+If VLAN encapsulation is used, all traffic sent to a port will either have the same VLAN id (shared networks) or distinct VLAN ids (dedicated edge networks).
Basic Packet Description
------------------------
diff --git a/docs/development/design/versioning.rst b/docs/development/design/versioning.rst
index 8103534..40e70f2 100644
--- a/docs/development/design/versioning.rst
+++ b/docs/development/design/versioning.rst
@@ -13,4 +13,4 @@ These git tags are applied indepently of the OPNFV release tags which are applie
In general it is recommended to always have a project git version tag associated to any OPNFV release tag content obtained from a sync from master.
-NFVbench Docker containers will be versioned based on the OPNF release tags or based on NFVbench project tags.
+NFVbench Docker containers will be versioned based on the NFVbench project tags.
diff --git a/docs/development/overview/overview.rst b/docs/development/overview/overview.rst
index 792d50f..26e19d1 100644
--- a/docs/development/overview/overview.rst
+++ b/docs/development/overview/overview.rst
@@ -12,10 +12,10 @@ Introduction
NFVbench is a python application that is designed to run in a compact and portable format inside a container and on production pods.
As such it only uses open source software with minimal hardware requirements (just a NIC card that is DPDK compatible).
Traffic generation is handled by TRex on 2 physical ports (2x10G or higher) forming traffic loops up to VNF level and following
-a path that is common to all NFV applications: external source to top of rack switch(es) to conpute node(s) to vswitch (if applicable)
+a path that is common to all NFV applications: external source to top of rack switch(es) to compute node(s) to vswitch (if applicable)
to VNF(s) and back.
-Configuration of benchmarks is through a hierarchy of yaml configuraton files and command line arguments.
+Configuration of benchmarks is done through a yaml configuration file and command line arguments.
Results are available in different formats:
- text output with tabular results
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 655559d..6feeffe 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -7,20 +7,21 @@ RELEASE NOTES
Release 2.0
===========
+NFVbench will now follow its own project release numbering (x.y.z), which is independent of the OPNFV release numbering (opnfv-x.y.z).
+
Major release highlights:
-- Dedicated chain networks
-- VxLAN support with VTEP in the traffic generator
+- Dedicated edge networks for each chain
- Enhanced chain analysis
- Code refactoring and enhanced unit testing
- Miscellaneous enhancement
-Dedicated chain networks
-------------------------
-NFVbench 1.x only supported shared networks across chains.
-For example, 20xPVP would create only 2 networks (left and right) shared by all chains.
-With NFVbench 2.0, chain networks will become dedicated (unshared) by default with an option in
-the nfvbench configuration to shared them. A 20xPVP run will create 2x20 networks instead.
+Dedicated edge networks for each chain
+--------------------------------------
+NFVbench 1.x only supported shared edge networks for all chains.
+For example, 20xPVP would create only 2 edge networks (left and right) shared by all chains.
+With NFVbench 2.0, chain networks are dedicated (unshared) by default with an option in
+the nfvbench configuration to share them. A 20xPVP run will create 2x20 networks instead.
Enhanced chain analysis
-----------------------
diff --git a/docs/testing/user/userguide/advanced.rst b/docs/testing/user/userguide/advanced.rst
index 02c7fce..1d2ac36 100644
--- a/docs/testing/user/userguide/advanced.rst
+++ b/docs/testing/user/userguide/advanced.rst
@@ -201,44 +201,7 @@ For example to run NFVbench with 3 PVP chains:
It is not necessary to specify the service chain type (-sc) because PVP is set as default. The PVP service chains will have 3 VMs in 3 chains with this configuration.
If ``-sc PVVP`` is specified instead, there would be 6 VMs in 3 chains as this service chain has 2 VMs per chain.
-Both **single run** or **NDR/PDR** can be run as multichain. Running multichain is a scenario closer to a real life situation than runs with a single chain.
-
-
-External Chain
---------------
-
-NFVbench can measure the performance of 1 or more L3 service chains that are setup externally. Instead of being setup by NFVbench,
-the complete environment (VMs and networks) has to be setup prior to running NFVbench.
-
-Each external chain is made of 1 or more VNFs and has exactly 2 end network interfaces (left and right network interfaces) that are connected to 2 neutron networks (left and right networks).
-The internal composition of a multi-VNF service chain can be arbitrary (usually linear) as far as NFVbench is concerned,
-the only requirement is that the service chain can route L3 packets properly between the left and right networks.
-
-To run NFVbench on such external service chains:
-
-- explicitly tell NFVbench to use external service chain by adding ``-sc EXT`` or ``--service-chain EXT`` to NFVbench CLI options
-- specify the number of external chains using the ``-scc`` option (defaults to 1 chain)
-- specify the 2 end point networks of your environment in ``external_networks`` inside the config file.
- - The two networks specified there have to exist in Neutron and will be used as the end point networks by NFVbench ('napa' and 'marin' in the diagram below)
-- specify the router gateway IPs for the external service chains (1.1.0.2 and 2.2.0.2)
-- specify the traffic generator gateway IPs for the external service chains (1.1.0.102 and 2.2.0.102 in diagram below)
-- specify the packet source and destination IPs for the virtual devices that are simulated (10.0.0.0/8 and 20.0.0.0/8)
-
-
-.. image:: images/extchain-config.png
-
-L3 routing must be enabled in the VNF and configured to:
-
-- reply to ARP requests to its public IP addresses on both left and right networks
-- route packets from each set of remote devices toward the appropriate dest gateway IP in the traffic generator using 2 static routes (as illustrated in the diagram)
-
-Upon start, NFVbench will:
-- first retrieve the properties of the left and right networks using Neutron APIs,
-- extract the underlying network ID (typically VLAN segmentation ID),
-- generate packets with the proper VLAN ID and measure traffic.
-
-Note that in the case of multiple chains, all chains end interfaces must be connected to the same two left and right networks.
-The traffic will be load balanced across the corresponding gateway IP of these external service chains.
+Both **single run** and **NDR/PDR** can be run as multichain. Running multichain is a scenario closer to a real life situation than runs with a single chain.
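+
+For reference, the same multichain run can also be described in the configuration file rather than on
+the command line. A minimal sketch, assuming the option names used by the default configuration file:
+
+.. code-block:: yaml
+
+    # equivalent of "-sc PVP -scc 3" on the command line
+    service_chain: PVP
+    service_chain_count: 3
+    # set to true to make all 3 chains share the same left and right networks
+    service_chain_shared_net: false
+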
Multiflow
diff --git a/docs/testing/user/userguide/extchains.rst b/docs/testing/user/userguide/extchains.rst
new file mode 100644
index 0000000..f7c0e51
--- /dev/null
+++ b/docs/testing/user/userguide/extchains.rst
@@ -0,0 +1,77 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Cisco Systems, Inc
+
+===============
+External Chains
+===============
+
+NFVbench can measure the performance of 1 or more L3 service chains that are set up externally, with or without OpenStack.
+Instead of being set up by NFVbench, the complete environment (VNFs and networks) must be set up prior to running NFVbench.
+
+Each external chain is made of 1 or more VNFs and has exactly 2 edge network interfaces (left and right network interfaces)
+that are connected to 2 edge networks (left and right networks).
+The 2 edge networks of each chain can either be shared across all chains or be dedicated to that chain.
+
+The internal composition of a multi-VNF service chain can be arbitrary (usually linear) as far as NFVbench is concerned;
+the only requirement is that the service chain can route L3 packets properly between the left and right networks.
+
+The network topology of the service chains is defined by the "service_chain_shared_net" option in the
+NFVbench configuration file.
+
+
+Shared Edge Networks
+--------------------
+
+This mode is selected when "service_chain_shared_net" is set to true.
+All chains must share the same 2 edge networks and the VNF gateway IP addresses on each edge
+must all belong to the same subnet.
+
+.. image:: images/nfvbench-ext-shared.png
+
+The main advantage of this mode is that only 2 network segments are needed to support an arbitrary number of chains.
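+
+A minimal configuration sketch for this mode without OpenStack (option names follow the default
+NFVbench configuration file, the VLAN ids are examples only, and the exact layout expected for the
+"vlans" option is described in the default configuration file):
+
+.. code-block:: yaml
+
+    service_chain: EXT
+    service_chain_count: 2
+    service_chain_shared_net: true
+    # single left and right VLAN ids shared by the 2 chains
+    vlans: [100, 200]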
+
+
+Multi-VLAN Edge Networks
+------------------------
+
+This mode is selected when "service_chain_shared_net" is set to false (default).
+Each chain has its own dedicated left and right networks, and there is no inter-chain constraint
+on the VNF IP addresses since they all belong to different network segments.
+
+.. image:: images/nfvbench-ext-multi-vlans.png
+
+The advantage of this mode is that the configuration of the VNFs can be made identical (same
+gateway IP addresses, same static routes).
+However this mode requires 2 network segments per chain.
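+
+A minimal configuration sketch for this mode without OpenStack (for illustration only; the exact layout
+expected for the "vlans" option is described in the default configuration file):
+
+.. code-block:: yaml
+
+    service_chain: EXT
+    service_chain_count: 2
+    service_chain_shared_net: false
+    # one left/right VLAN id pair per chain (example values)
+    vlans: [[100, 200], [101, 201]]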
+
+
+Detailed Example
+----------------
+To run NFVbench on external service chains using shared edge networks (a configuration sketch is given after the diagram below):
+
+- tell NFVbench to use external service chains by adding "-sc EXT" or "--service-chain EXT" to NFVbench CLI options
+- specify the number of external chains using the "-scc" option (defaults to 1 chain)
+- if OpenStack is used:
+ - specify the name of the 2 edge networks in "external_networks" in the NFVbench configuration file
+ - The two networks specified have to exist in Neutron ('napa' and 'marin' in the diagram below)
+- if OpenStack is not used:
+ - specify the VLAN id to use for the 2 edge networks in "vlans" in the NFVbench configuration file
+- specify the VNF gateway IPs for the external service chains (1.1.0.2 and 2.2.0.2)
+- specify the traffic generator gateway IPs for the external service chains (1.1.0.102 and 2.2.0.102 in diagram below)
+- specify the packet source and destination IPs for the virtual devices that are simulated (10.0.0.0/8 and 20.0.0.0/8)
+
+.. image:: images/extchain-config.png
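+
+For illustration, the example above could translate into a configuration along these lines (option
+names follow the default NFVbench configuration file and may evolve; adjust the values to the actual
+environment):
+
+.. code-block:: yaml
+
+    service_chain: EXT            # can also be passed as "-sc EXT" on the CLI
+    service_chain_count: 1        # can also be passed as "-scc 1" on the CLI
+    service_chain_shared_net: true
+    external_networks:
+        left: 'napa'              # existing Neutron network (left edge)
+        right: 'marin'            # existing Neutron network (right edge)
+    traffic_generator:
+        # IP ranges of the simulated devices behind each edge
+        ip_addrs: ['10.0.0.0/8', '20.0.0.0/8']
+        # gateway IPs owned by the traffic generator on each edge
+        tg_gateway_ip_addrs: ['1.1.0.102', '2.2.0.102']
+        # VNF gateway IPs on each edge network
+        gateway_ip_addrs: ['1.1.0.2', '2.2.0.2']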
+
+L3 routing must be enabled in the VNF and configured to:
+
+- reply to ARP requests to its public IP addresses on both left and right networks
+- route packets from each set of remote devices toward the appropriate dest gateway IP in the traffic generator using 2 static routes (as illustrated in the diagram)
+
+Upon start, NFVbench will:
+
+- first retrieve the properties of the left and right networks using Neutron APIs (when OpenStack is used),
+- extract the underlying network ID (typically the VLAN segmentation ID),
+- generate packets with the proper VLAN ID and measure traffic.
+
+Note that in the case of multiple chains sharing the same edge networks, the end interfaces of all chains must be connected to the same two left and right networks.
+The traffic will be load balanced across the corresponding gateway IPs of these external service chains.
diff --git a/docs/testing/user/userguide/images/nfvbench-ext-multi-vlans.png b/docs/testing/user/userguide/images/nfvbench-ext-multi-vlans.png
new file mode 100644
index 0000000..2ef2300
--- /dev/null
+++ b/docs/testing/user/userguide/images/nfvbench-ext-multi-vlans.png
Binary files differ
diff --git a/docs/testing/user/userguide/images/nfvbench-ext-shared.png b/docs/testing/user/userguide/images/nfvbench-ext-shared.png
new file mode 100644
index 0000000..efe1c71
--- /dev/null
+++ b/docs/testing/user/userguide/images/nfvbench-ext-shared.png
Binary files differ
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index c7c57c8..e83912f 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -24,10 +24,8 @@ Table of Content
installation
examples
advanced
+ extchains
fluentd
sriov
server
faq
-
-
-
diff --git a/docs/testing/user/userguide/readme.rst b/docs/testing/user/userguide/readme.rst
index b437ff9..9915653 100644
--- a/docs/testing/user/userguide/readme.rst
+++ b/docs/testing/user/userguide/readme.rst
@@ -113,14 +113,10 @@ PVVP Packet Path
^^^^^^^^^^^^^^^^
This packet path represents a single service chain with 2 loopback VNFs in sequence and 3 Neutron networks.
-The 2 VNFs can run on the same compute node (PVVP intra-node):
+The 2 VNFs will always run on the same compute node (PVVP intra-node):
.. image:: images/nfvbench-pvvp.png
-or on different compute nodes (PVVP inter-node) based on a configuration option:
-
-.. image:: images/nfvbench-pvvp2.png
-
Multi-Chaining (N*PVP or N*PVVP)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -135,8 +131,9 @@ Example of multi-chaining with 2 concurrent PVP service chains:
This innovative feature makes it easy to measure the performance of a fully loaded compute node running multiple service chains.
-Multi-chaining is currently limited to 1 compute node (PVP or PVVP intra-node) or 2 compute nodes (for PVVP inter-node).
-The 2 edge interfaces for all service chains will share the same 2 networks.
+Multi-chaining is currently limited to 1 compute node (all VMs run on the same compute node).
+The 2 edge interfaces for all service chains can either share the same 2 networks or use
+dedicated networks (based on a configuration option).
The total traffic will be split equally across all chains.
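+
+For example, a sketch of the configuration for 2 concurrent PVVP chains with dedicated networks
+(option names follow the default configuration file and may evolve):
+
+.. code-block:: yaml
+
+    service_chain: PVVP
+    service_chain_count: 2
+    service_chain_shared_net: false   # default; set to true to share the 2 edge networks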
@@ -195,4 +192,4 @@ NFVbench is agnostic of the virtual switch implementation and has been tested wi
Limitations
***********
NFVbench only supports VLAN with OpenStack.
-NFVbench does not support VxLAN overlays.
+Support for VxLAN overlays is planned for a coming release.