Benchmarking Virtual Switches in OPNFV

   Intel
   Intel
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748 USA
   +1 732 420 1571
   +1 732 368 1192
   acmorton@att.com
   http://home.comcast.net/~acmacm/

This memo describes the progress of the Open Platform for NFV (OPNFV)
project on virtual switch performance "VSWITCHPERF". This project
intends to build on the current and completed work of the Benchmarking
Methodology Working Group in IETF, by referencing existing literature.
The Benchmarking Methodology Working Group has traditionally conducted
laboratory characterization of dedicated physical implementations of
internetworking functions. Therefore, this memo begins to describe the
additional considerations when virtual switches are implemented in
general-purpose hardware. The expanded tests and benchmarks are also
influenced by the OPNFV mission to support virtualization of the "telco"
infrastructure.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

The Benchmarking Methodology Working Group (BMWG) has traditionally
conducted laboratory characterization of dedicated physical
implementations of internetworking functions. The Black-box Benchmarks
of Throughput, Latency, Forwarding Rates and others have served our
industry for many years. Now, Network Function Virtualization (NFV) has
the goal to transform how internetwork functions are implemented, and
therefore has garnered much attention.

This memo describes the progress of the Open Platform for NFV (OPNFV)
project on virtual switch performance characterization, "VSWITCHPERF".
This project intends to build on the current and completed work of the
Benchmarking Methodology Working Group in IETF, by referencing existing
literature. For example, currently the most referenced RFC is [RFC2544] (which depends on [RFC1242]), and the foundation of the benchmarking work in OPNFV is therefore common and strong. See
https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
for more background, and the OPNFV website for general information:
https://www.opnfv.org/

The authors note that OPNFV distinguishes itself from other open
source compute and networking projects through its emphasis on existing
"telco" services as opposed to cloud-computing. There are many ways in
which telco requirements place a different emphasis on performance
dimensions when compared to cloud computing: support for and transfer of
isochronous media streams is one example.

Note also that the move to NFV Infrastructure has resulted in many
new benchmarking initiatives across the industry, and the authors are
currently doing their best to maintain alignment with many other
projects; this Internet Draft is evidence of those efforts.

The primary purpose and scope of this memo is to inform BMWG of
work-in-progress that builds on the body of extensive literature and
experience. Additionally, once the initial information conveyed here is
received, this memo may be expanded to include more detail and
commentary from both BMWG and OPNFV communities, under BMWG's chartered
work to characterize the NFV Infrastructure (a virtual switch is an
important aspect of that infrastructure).

This section highlights some specific considerations (from [I-D.ietf-bmwg-virtual-net]) related to Benchmarks for virtual switches.
To compare the performance of virtual designs and implementations with their physical counterparts, identical benchmarks are needed. BMWG has developed specifications for many network functions; this memo re-uses existing benchmarks through references, and expands them during the development of new methods. A key configuration aspect is the number of parallel cores required to achieve performance comparable to a given physical device, or whether some limit of scale is reached before the cores can achieve a comparable level.
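A minimal sketch of how this comparison might be automated is shown below, assuming a hypothetical run_throughput_test() hook into the traffic generator and an illustrative physical baseline value; neither is part of the VSWITCHPERF specification.

   # Sketch: sweep the number of cores allocated to the virtual switch and
   # report either the core count that matches a physical baseline or the
   # point at which scaling stops.  run_throughput_test() is a hypothetical
   # placeholder for the test-harness integration.

   PHYSICAL_BASELINE_MBPS = 9800.0   # illustrative baseline from a physical device
   MAX_CORES = 8                     # illustrative upper bound for the sweep

   def run_throughput_test(cores: int) -> float:
       """Placeholder: measure zero-loss throughput (Mbps) with this many cores."""
       raise NotImplementedError("integrate with the traffic generator in use")

   def cores_to_match_baseline() -> None:
       best = 0.0
       for cores in range(1, MAX_CORES + 1):
           mbps = run_throughput_test(cores)
           print(f"{cores} core(s): {mbps:.1f} Mbps (zero-loss)")
           if mbps >= PHYSICAL_BASELINE_MBPS:
               print(f"comparable performance reached with {cores} core(s)")
               return
           if mbps <= best:
               print("scaling limit reached before matching the baseline")
               return
           best = mbps
       print("baseline not reached within the core budget")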
It's unlikely that the virtual switch will be the only application running on the SUT, so CPU utilization, Cache utilization, and Memory footprint should also be recorded for the virtual implementations of internetworking functions.
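One possible way to record these auxiliary metrics during a benchmark run is sketched below; the psutil package and the process name "ovs-vswitchd" are assumptions for illustration only (cache utilization normally requires hardware performance counters and is omitted here).

   # Sketch: sample CPU utilization and memory footprint (RSS) of the
   # vswitch process while a benchmark runs.  Assumes the third-party
   # psutil package; the process name is only an example.
   import time
   import psutil

   VSWITCH_PROC_NAME = "ovs-vswitchd"   # assumed process name

   def find_vswitch() -> psutil.Process:
       for proc in psutil.process_iter(["name"]):
           if proc.info["name"] == VSWITCH_PROC_NAME:
               return proc
       raise RuntimeError(f"{VSWITCH_PROC_NAME} not found")

   def sample_resources(duration_s=60, interval_s=1.0):
       """Return one sample per interval: CPU percent and RSS in bytes."""
       proc = find_vswitch()
       proc.cpu_percent(None)            # prime the CPU counter
       samples = []
       end = time.time() + duration_s
       while time.time() < end:
           time.sleep(interval_s)
           samples.append({"cpu_percent": proc.cpu_percent(None),
                           "rss_bytes": proc.memory_info().rss})
       return samples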
External observations remain essential as the basis for Benchmarks. Internal observations with fixed specification and interpretation will be provided in parallel to assist the development of operations procedures when the technology is deployed.
The overall specification in preparation is referred to as a Level Test Design (LTD) document, which will contain a suite of performance tests.

As one might expect, the most fundamental internetworking characteristics of Throughput and Latency remain important when the switch is virtualized, and these benchmarks figure prominently in the specification.
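For context, the classic Throughput benchmark searches for the highest offered load at which no frames are lost; a minimal sketch of such a binary search is shown below, assuming a hypothetical send_at_rate() hook that offers traffic for a trial and returns the number of lost frames.

   # Sketch: RFC 2544-style binary search for Throughput (highest offered
   # rate with zero frame loss).  send_at_rate() is a hypothetical
   # traffic-generator hook, not part of any particular harness.

   LINE_RATE_FPS = 14880952       # illustrative: 10GbE line rate, 64-byte frames
   RESOLUTION_FPS = 1000          # stop when the search interval is this narrow

   def send_at_rate(rate_fps: float, duration_s: int = 60) -> int:
       """Placeholder: offer traffic at rate_fps for duration_s, return lost frames."""
       raise NotImplementedError("integrate with the traffic generator in use")

   def rfc2544_throughput() -> float:
       low, high = 0.0, float(LINE_RATE_FPS)
       best = 0.0
       while high - low > RESOLUTION_FPS:
           rate = (low + high) / 2
           if send_at_rate(rate) == 0:   # no loss: try a higher rate
               best, low = rate, rate
           else:                         # loss observed: back off
               high = rate
       return best                       # highest zero-loss rate found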
When considering characteristics important to "telco" network functions, we must begin to consider additional performance metrics. In this case, the project specifications have referenced metrics from the IETF IP Performance Metrics (IPPM) literature.
This means that the test of Latency is replaced by measurement of a metric derived from IPPM's [RFC2679], where a set of statistical summaries will be provided (mean, max, min, etc.). Further metrics planned to be benchmarked include packet delay variation as defined by [RFC5481], reordering, burst behaviour, DUT availability, DUT capacity, and packet loss in long-term testing at the Throughput level, where some low level of background loss may be present and characterized.
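To illustrate the kind of summaries intended, the sketch below computes min/mean/max and a high percentile over a set of one-way delay samples, together with a simple packet delay variation series (each delay minus the minimum observed delay, in the spirit of the PDV form); the sample values are made up.

   # Sketch: statistical summaries of one-way delay samples and a simple
   # PDV series (delay minus minimum delay).  Input units: seconds.
   import statistics

   def delay_summary(delays):
       ranked = sorted(delays)
       p99_index = max(0, int(0.99 * len(ranked)) - 1)
       return {"min": ranked[0],
               "mean": statistics.fmean(ranked),
               "max": ranked[-1],
               "p99": ranked[p99_index]}

   def pdv_series(delays):
       d_min = min(delays)
       return [d - d_min for d in delays]

   # Example with made-up samples (seconds):
   samples = [0.000120, 0.000135, 0.000128, 0.000410, 0.000131]
   print(delay_summary(samples))
   print(pdv_series(samples))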
Tests have been (or will be) designed to collect the metrics below:

o  Throughput Tests, to measure the maximum forwarding rate (in frames per second, or fps) and bit rate (in Mbps) for a constant load (as defined by RFC 1242) without traffic loss.

o  Packet and Frame Delay Distribution Tests, to measure average, min and max packet and frame delay for constant loads.

o  Packet Delay Tests, to understand the latency distribution for different packet sizes and over an extended test run, in order to uncover outliers.

o  Scalability Tests, to understand how the virtual switch performs as the parameters it has to deal with (the number of flows, active ports, complexity of the forwarding logic's configuration, and so on) increase; see the flow-generation sketch after this list.

o  Stream Performance Tests (TCP, UDP), to measure bulk data transfer performance, i.e. how fast systems can send and receive data through the switch.

o  Request/Response Performance Tests (TCP, UDP), to measure the transaction rate through the switch.

o  Control Path and Datapath Coupling Tests, to understand how closely coupled the datapath and the control path are, as well as the effect of this coupling on the performance of the DUT (for example, the delay of the initial packet of a flow).

o  Noisy Neighbour Tests, to understand the effects of resource sharing on the performance of a virtual switch.

o  CPU and Memory Consumption Tests, to understand the virtual switch's footprint on the system; these are usually conducted as auxiliary measurements alongside the benchmarks above, and include CPU utilization, Cache utilization and Memory footprint.
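As referenced in the Scalability Tests item above, one simple way to scale the number of flows offered to the switch is to vary the header fields used for flow classification; the sketch below generates N distinct flow definitions that a traffic generator could be configured with (the field layout is illustrative only).

   # Sketch: generate N distinct flow definitions (varying destination IP
   # and UDP port) for a flow-scalability sweep.  Real traffic generators
   # have their own flow/stream configuration models.
   import ipaddress

   def make_flows(count: int, base_dst: str = "10.0.0.1"):
       base = ipaddress.IPv4Address(base_dst)
       return [{"src_ip": "192.168.0.2",
                "dst_ip": str(base + i),
                "proto": "udp",
                "src_port": 1024,
                "dst_port": 5000 + (i % 60000)}
               for i in range(count)]

   # Example: sweep the flow count before re-running a forwarding-rate test.
   for n in (1, 1000, 100000):
       flows = make_flows(n)
       print(n, "flows, e.g.", flows[0]["dst_ip"], "...", flows[-1]["dst_ip"])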
The flexibility of deployment of a virtual switch within a network means that the existing BMWG and IETF literature needs to be used to characterize the performance of the switch in various deployment scenarios. The deployment scenarios under consideration include paths from one physical port through the virtual switch to another physical port, paths that additionally traverse one or more VNFs between the physical ports, and paths between VNFs on the same host.
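One compact way to express such deployment scenarios in a test configuration is sketched below; the scenario names and path notation are hypothetical examples rather than identifiers taken from the VSWITCHPERF specification.

   # Sketch: deployment scenarios expressed as ordered paths through the
   # SUT.  Names and notation are hypothetical, not normative identifiers.
   DEPLOYMENT_SCENARIOS = {
       # physical port -> virtual switch -> physical port
       "phy_to_phy": ["phy0", "vswitch", "phy1"],
       # physical port -> virtual switch -> VNF -> virtual switch -> physical port
       "phy_vnf_phy": ["phy0", "vswitch", "vnf0", "vswitch", "phy1"],
       # VNF -> virtual switch -> VNF (east-west traffic on the same host)
       "vnf_to_vnf": ["vnf0", "vswitch", "vnf1"],
   }

   for name, path in DEPLOYMENT_SCENARIOS.items():
       print(name, ":", " -> ".join(path))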
Benchmarking activities as described in this memo are limited to technology characterization of a Device Under Test/System Under Test (DUT/SUT) using controlled stimuli in a laboratory environment, with dedicated address space and the constraints specified in the sections above.
The benchmarking network topology will be an independent test setup and MUST NOT be connected to devices that may forward the test traffic into a production network, or misroute traffic to the test management network.

Further, benchmarking is performed on a "black-box" basis, relying solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes. Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.

No IANA Action is requested at this time.

The authors acknowledge