Disk Testing using iperf3
iperf3 supports using a file as the data source on the client or as the data sink on the server (the -F option), which makes disk-to-disk testing possible.
For disk-to-disk testing, run the following sequence of tests to isolate each component.
First, test memory to memory:
server: iperf3 -s
client: iperf3 -c testhost
Then disk to memory:
server: iperf3 -s
client: iperf3 -c testhost -F filename -t 60
Then memory to disk:
server: iperf3 -s -F filename
client: iperf3 -c testhost -t 60
The slowest of these tests indicates where the bottleneck is. You can also run a disk-to-disk test by combining the commands above.
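For example, a disk-to-disk run reads from a file on the client and writes to a file on the server by using -F on both ends (a sketch; the host name and file paths are placeholders):
server: iperf3 -s -F /path/to/destination_file
client: iperf3 -c testhost -F /path/to/source_file -t 60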
Note that if you run a test a second time, the file will already be in the page cache, so either use a different file or drop the cache first (as root):
sync; echo 3 > /proc/sys/vm/drop_caches
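For example, a minimal sequence for preparing a fresh, uncached source file between runs might look like this (the path and size are only examples; dropping the cache requires root):
# create a 2 GByte file of random data to use as the iperf3 source
dd if=/dev/urandom of=/storage/data/testfile bs=1M count=2048
# flush dirty pages and drop the page cache so the next read really hits the disk
sync
echo 3 > /proc/sys/vm/drop_caches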
The tests will run until the end of the file or until the end of the test duration, whichever comes first.
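For example, a 60 second test at roughly 2 Gbits/sec moves about 15 GBytes (2 Gbits/sec x 60 sec / 8 bits per Byte), so the source file needs to be at least that large if you want the duration, rather than the file size, to end the test.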
Sample Tests:
First try a memory to memory test:
server: iperf3 -s
client: iperf3 -c hostname -i1
[ ID] Interval Transfer Bandwidth Retransmits
[ 4] 0.00-1.00 sec 460 MBytes 3.86 Gbits/sec 0
[ 4] 1.00-2.00 sec 694 MBytes 5.82 Gbits/sec 0
[ 4] 2.00-3.00 sec 698 MBytes 5.85 Gbits/sec 0
[ 4] 3.00-4.00 sec 700 MBytes 5.87 Gbits/sec 0
[ 4] 4.00-5.00 sec 695 MBytes 5.83 Gbits/sec 0
[ 4] 5.00-6.00 sec 695 MBytes 5.83 Gbits/sec 0
[ 4] 6.00-7.00 sec 696 MBytes 5.84 Gbits/sec 0
[ 4] 7.00-8.00 sec 699 MBytes 5.86 Gbits/sec 0
[ 4] 8.00-9.00 sec 694 MBytes 5.82 Gbits/sec 0
[ 4] 9.00-10.00 sec 695 MBytes 5.83 Gbits/sec 0
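If you want to keep these numbers around for comparison, iperf3 can also produce machine-readable output with the -J (JSON) flag, for example (the output file name is just an example):
client: iperf3 -c hostname -J > mem-to-mem.json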
Next, do a disk read test (the client reads the file from disk and sends it over the network):
server: iperf3 -s
client: iperf3 -c hostname -i1 -F /storage/data/filename
[ ID] Interval Transfer Bandwidth Retransmits
[ 4] 0.00-1.16 sec 92.5 MBytes 669 Mbits/sec 0
[ 4] 1.16-2.00 sec 142 MBytes 1.42 Gbits/sec 0
[ 4] 2.00-3.00 sec 235 MBytes 1.97 Gbits/sec 0
[ 4] 3.00-4.00 sec 188 MBytes 1.57 Gbits/sec 0
[ 4] 4.00-5.02 sec 202 MBytes 1.66 Gbits/sec 0
[ 4] 5.02-6.21 sec 256 MBytes 1.81 Gbits/sec 0
[ 4] 6.21-7.00 sec 204 MBytes 2.15 Gbits/sec 0
[ 4] 7.00-8.00 sec 209 MBytes 1.75 Gbits/sec 0
[ 4] 8.00-9.14 sec 228 MBytes 1.67 Gbits/sec 0
[ 4] 9.14-10.15 sec 256 MBytes 2.13 Gbits/sec 0
This is much slower than the memory-to-memory test above, so in this direction we are disk limited, not network limited.
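To confirm that the disk is the limit, you can also measure the local read rate of the same file directly, taking the network out of the picture entirely (a sketch; the block size and path are examples, and the number is only meaningful if the file is not already cached):
dd if=/storage/data/filename of=/dev/null bs=1M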
Next try memory to disk. Note that for disk write tests, you need to run a longer test to factor out network buffering issues.
server: iperf3 -s -F /storage/data/test.out
client: iperf3 -c hostname -i1 -t 40
[ ID] Interval Transfer Bandwidth Retransmits
[ 4] 0.00-1.00 sec 379 MBytes 3.18 Gbits/sec 0
[ 4] 1.00-2.00 sec 659 MBytes 5.53 Gbits/sec 0
[ 4] 2.00-3.00 sec 692 MBytes 5.81 Gbits/sec 0
[ 4] 3.00-4.00 sec 634 MBytes 5.32 Gbits/sec 0
[ 4] 4.00-5.00 sec 615 MBytes 5.16 Gbits/sec 0
[ 4] 5.00-6.00 sec 650 MBytes 5.45 Gbits/sec 0
[ 4] 6.00-7.00 sec 392 MBytes 3.29 Gbits/sec 0
[ 4] 7.00-8.00 sec 622 MBytes 5.22 Gbits/sec 0
[ 4] 8.00-9.00 sec 585 MBytes 4.91 Gbits/sec 0
[ 4] 9.00-10.00 sec 519 MBytes 4.35 Gbits/sec 0
[ 4] 10.00-11.00 sec 484 MBytes 4.06 Gbits/sec 0
...
[ 4] 30.00-31.00 sec 399 MBytes 3.34 Gbits/sec 0
[ 4] 31.00-32.00 sec 396 MBytes 3.32 Gbits/sec 0
[ 4] 32.00-33.00 sec 229 MBytes 1.92 Gbits/sec 0
[ 4] 33.00-34.00 sec 474 MBytes 3.97 Gbits/sec 0
[ 4] 34.00-35.00 sec 456 MBytes 3.83 Gbits/sec 0
[ 4] 35.00-36.00 sec 421 MBytes 3.53 Gbits/sec 0
[ 4] 36.00-37.00 sec 382 MBytes 3.21 Gbits/sec 0
[ 4] 37.00-38.00 sec 395 MBytes 3.31 Gbits/sec 0
[ 4] 38.00-39.00 sec 412 MBytes 3.46 Gbits/sec 0
[ 4] 39.00-40.00 sec 372 MBytes 3.12 Gbits/sec 0
Disk writes in this case are faster than disk reads, but still slower than the network.
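As with the read test, you can sanity-check the write side locally with dd; conv=fdatasync makes dd flush the data to disk before reporting, so the rate reflects the disk rather than the page cache (a sketch; the path, block size, and count are examples):
dd if=/dev/zero of=/storage/data/test.out bs=1M count=2048 conv=fdatasync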