Throughput Tool Comparison

There are a number of open-source command-line tools available for Unix that measure memory-to-memory network throughput. Some of the more popular tools include:

  • iperf2
  • iperf3
  • nuttcp

Each of these tools has slightly different features and a slightly different architecture, so you should not expect any one tool to have everything you need. It's best to be familiar with multiple tools, and to use the right tool for your particular use case.

One key difference is whether the tool is single-threaded or multi-threaded. If you want to test parallel stream performance, use a multi-threaded tool such as iperf2.
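For example, a minimal iperf2 invocation with four parallel TCP streams might look like the following, where hostname is a placeholder for the receiving host (which must be running iperf -s):

 iperf -c hostname -P 4 -i 2 -t 30

This runs four parallel streams for 30 seconds, reporting every 2 seconds.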

Tool Feature Comparison
Feature                          iperf 2.0.5   iperf 2.0.13+   iperf 3.7+      nuttcp 8.x
multi-threading                  -P            -P
JSON output                                                    --json
CSV output                       -y            -y
FQ-based pacing                                --fq-rate       --fq-rate
multicast support                --ttl         --ttl                           -m
bi-directional testing           --dualtest    --dualtest      --bidir
retransmit and CWND report                     -e              on by default   -br / -bc
skip TCP slow start                                            --omit
set TCP congestion control alg.  -Z            -Z              --congestion
zero-copy (sendfile)                                           --zerocopy
UDP burst mode                                                                 -Ri#/#
select CPU core                                                -A              -xc#/#
MS Windows support               yes           yes             no              no
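To illustrate how these flags combine, here is a sketch of an iperf3 test that skips TCP slow start, selects a congestion control algorithm, and emits JSON; hostname is a placeholder, and the bbr algorithm must be available on the sending host:

 iperf3 -c hostname --omit 2 --congestion bbr --json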

Note that all three of these tools are under active development, and the list of unique features for a given tool will likely change over time.

Based on our experience, we recommend the following:

  • Use iperf2 for parallel stream, bidirectional, or MS Windows-based tests
  • Use nuttcp for high-speed UDP testing (see the sketch below)
  • Use iperf3 otherwise, in particular if you want detailed JSON output. Yes, we are a bit biased. :-)
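As a sketch of the high-speed UDP case, the following uses nuttcp's -Ri#/# burst option from the table above; the 9G rate and 100-packet burst are illustrative values, and receive_host is a placeholder:

 nuttcp -u -w4m -i1 -Ri9G/100 receive_host

This should send UDP at roughly 9 Gbps in bursts of 100 packets, reporting once per second.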

For all three tools we recommend running the latest version, as all of them have been getting important updates every few months. In particular, there are known UDP issues with iperf2.0.5 and with iperf3.1.4 and earlier, so these versions should be avoided for UDP testing.
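Since these fixes are version-specific, it is worth confirming what is installed before testing. Each tool can print its version; the flag spellings below are the common ones, and iperf2 and nuttcp may write to stderr:

 iperf -v
 iperf3 --version
 nuttcp -V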

Use of these tools in perfSONAR

perfSONAR's pScheduler tool can be used to run all of these tools. If you wish to use iperf2 on a CentOS6-based host, we highly recommend installing iperf2.0.8 or later from source.

The basic commands for each tool are:

 pscheduler task --tool [tool name] throughput --dest receive_host --source send_host [tool options]

iperf3 is the default if --tool is not specified.
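For example, a 30-second, four-stream iperf2 test between two placeholder hosts might look like the following; note that pScheduler durations use ISO 8601 notation (e.g. PT30S):

 pscheduler task --tool iperf2 throughput --source send_host --dest receive_host --parallel 4 --duration PT30S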

For the full list of options supported by pScheduler, run:

 pscheduler task throughput --help

Note that not all options are supported by all tools.

Sample UDP Test Details

Below are sample results for all four tool versions, running 9 Gbps UDP tests between two fast servers on a 10 Gbps network.

For all tools, use the "-w2m" flag to increase the window (socket buffer) size and reduce packet loss. Without that flag, all tools showed considerable loss.

From these results we find that iperf2.0.9, iperf3.1.7, and nuttcp8.1.4 all do well, but iperf2.0.5 cannot even reach 1Gbps. We also see that iperf3 is the least consistent in its sending rate.

>iperf3 -c hostname -u -b9G -w2m -i2
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-2.00 sec 2.07 GBytes 8.91 Gbits/sec 248878
[ 4] 2.00-4.00 sec 2.08 GBytes 8.91 Gbits/sec 249020
[ 4] 4.00-6.00 sec 2.11 GBytes 9.07 Gbits/sec 253434
[ 4] 6.00-8.00 sec 2.11 GBytes 9.08 Gbits/sec 253730
[ 4] 8.00-10.00 sec 2.03 GBytes 8.73 Gbits/sec 244013
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 10.4 GBytes 8.94 Gbits/sec 0.008 ms 0/1248863 (0%)
[ 4] Sent 1248863 datagrams
>iperf-2.0.9 -c hostname -u -b9G -w2m -i2
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 2.10 GBytes 9.00 Gbits/sec
[ 3] 2.0- 4.0 sec 2.10 GBytes 9.00 Gbits/sec
[ 3] 4.0- 6.0 sec 2.10 GBytes 9.00 Gbits/sec
[ 3] 6.0- 8.0 sec 2.10 GBytes 9.00 Gbits/sec
[ 3] 0.0-10.0 sec 10.5 GBytes 9.00 Gbits/sec
[ 3] Sent 7653060 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 10.4 GBytes 8.90 Gbits/sec 0.000 ms 58019/7653060 (0.76%)
>iperf-2.0.5 -c hostname -u -b9G -w2m -i2
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 195 MBytes 817 Mbits/sec
[ 3] 2.0- 4.0 sec 194 MBytes 813 Mbits/sec
[ 3] 4.0- 6.0 sec 194 MBytes 813 Mbits/sec
[ 3] 6.0- 8.0 sec 194 MBytes 813 Mbits/sec
[ 3] 8.0-10.0 sec 195 MBytes 817 Mbits/sec
[ 3] 0.0-10.0 sec 971 MBytes 814 Mbits/sec
[ 3] Sent 692420 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 971 MBytes 814 Mbits/sec 0.009 ms 0/692419 (0%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
>nuttcp -u -w2m -R9G -i2 server_name
2145.6562 MB / 2.00 sec = 8999.4941 Mbps 0 / 274644 ~drop/pkt 0.00 ~%loss
2145.7344 MB / 2.00 sec = 8999.8668 Mbps 0 / 274654 ~drop/pkt 0.00 ~%loss
2145.8125 MB / 2.00 sec = 9000.1945 Mbps 0 / 274664 ~drop/pkt 0.00 ~%loss
2145.7422 MB / 2.00 sec = 8999.8545 Mbps 0 / 274655 ~drop/pkt 0.00 ~%loss
2145.8125 MB / 2.00 sec = 9000.2305 Mbps 0 / 274664 ~drop/pkt 0.00 ~%loss
10728.8359 MB / 10.00 sec = 8999.9243 Mbps 100 %TX 29 %RX 0 / 1373
