Networking – iperf giving incorrect results

Tags: iperf, networking, performance

I've just had new broadband installed and wanted to test its throughput using iperf3. However, it appears to give considerably different results from more conventional speed tests.

E:\tmp> iperf3 -c 3.testdebit.info
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.01  sec  13.1 MBytes  11.0 Mbits/sec                  sender
[  4]   0.00-10.01  sec  13.1 MBytes  11.0 Mbits/sec                  receiver

Whereas online speed tests show the expected result of ~150 Mbit/s.

3.testdebit.info has been tested from Azure and is consistently around 330 Mbit/s (though who knows what that means any more!).

I have tried various servers, including a Linux box hosted on Azure that delivers ~100 Mbit/s to another Azure box. I also ran the test on port 80 to rule out any ISP throttling. All of those results are comparable.

Downloading a 3.5 GB file in 210 seconds works out to approximately 130 Mbit/s.

Can anyone shed any light on why iperf3 might be so low (or am I being really stupid and reading something wrong)?

These tests are all from the same computer, over Ethernet, so there is no wireless etc. to get in the way.

Edited to add:

Performing the same test with iperf2 (Windows client: iPerf 2.0.5-3; Ubuntu server: iperf 2.0.5) gives these results:

E:\tmp\iperf2> iperf -c <hidden>.cloudapp.net -p 5201
------------------------------------------------------------
Client connecting to <hidden>.cloudapp.net, TCP port 5201
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.2 port 51816 connected with <hidden> port 5201
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  12.1 MBytes  10.0 Mbits/sec

The same test performed from a Linux-based NAS:

Nas:~# iperf3 -c 3.testdebit.info
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  14.5 MBytes  12.2 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  14.5 MBytes  12.2 Mbits/sec                  receiver

And with the -R flag (reverse mode, so the server sends and the download direction is measured):

E:\tmp> iperf3 -c 3.testdebit.info -R
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  58.0 MBytes  48.6 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  57.0 MBytes  47.8 Mbits/sec                  receiver

To ensure it isn't a server issue, I have upgraded the Azure VM to a size that now pulls 600 Mbit/s up / 1 Gbit/s down from the 3.testdebit.info server.

In response to John Looker's answer

The main purpose of my question was to understand why iperf was giving such varied results. I understand that uploads are heavily shared, and I'm not too concerned with that (or at least it's a different question!).

The Azure servers I was using were in North and West Europe (Amsterdam and Ireland, I believe), and an online speed test against them achieved around 240 Mb/s.

It does, however, appear that multithreading was the issue. I have just rerun the test using five parallel streams (-P 5) instead of the default single stream:

E:\tmp>iperf3 -c 3.testdebit.info -R -P 5
Connecting to host 3.testdebit.info, port 5201
Reverse mode, remote host 3.testdebit.info is sending
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM]   0.00-10.00  sec   195 MBytes   163 Mbits/sec   50             sender
[SUM]   0.00-10.00  sec   190 MBytes   160 Mbits/sec                  receiver
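To see where the connection saturates, one could sweep the stream count rather than picking a single value. Below is a minimal sketch (not part of the original tests) that assumes iperf3 is on the PATH, supports JSON output via -J, and that the server from the question tolerates several back-to-back runs; the stream counts are arbitrary and the summary field names follow iperf3's JSON report for TCP tests.

#!/usr/bin/env python3
"""Sweep iperf3 parallel-stream counts and report measured download throughput."""
import json
import subprocess

SERVER = "3.testdebit.info"  # public server used in the question

for streams in (1, 2, 4, 8):
    # -R: reverse mode (server sends, i.e. measure download)
    # -P: number of parallel streams, -t 10: ten-second test, -J: JSON report
    proc = subprocess.run(
        ["iperf3", "-c", SERVER, "-R", "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(proc.stdout)
    mbps = report["end"]["sum_received"]["bits_per_second"] / 1e6
    print(f"{streams:>2} stream(s): {mbps:6.1f} Mbit/s")

If throughput keeps climbing as streams are added, a single TCP stream is the bottleneck rather than the line itself.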

Best Answer

  1. Good conventional speed tests are multi-threaded and create multiple connections to the speed test server, thus maxing out your connection to its full potential.

http://www.thinkbroadband.com/faq/sections/flash-speed-test.html#324

  2. iPerf3, with the default options, uses only a single data stream (plus a control connection), which may not be enough to max out your 152Mb broadband, particularly when congestion comes into play.

  3. Your download test also suggests multi-threaded connections.

Downloading a 3.5 GB file in 210 seconds works out to approximately 130 Mbit/s.

Your calculation is wrong, however:

(3.5 × 1024 × 1024 × 1024 bytes × 8 bits/byte) / 210 s / 1,000,000 ≈ 143 Mb/s average.

An average speed of 143 Mb/s is good for a download on the 152Mb tier.
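The gap between the ~130 Mbit/s figure in the question and the 143 Mb/s here comes down to whether "3.5 GB" is read as decimal gigabytes or binary gibibytes; a quick sketch of both readings:

SECONDS = 210
bits_gib = 3.5 * 1024**3 * 8  # 3.5 GiB (binary) expressed in bits
bits_gb = 3.5 * 1000**3 * 8   # 3.5 GB (decimal) expressed in bits

print(bits_gib / SECONDS / 1e6)  # ~143.2 Mbit/s -- the figure in this answer
print(bits_gb / SECONDS / 1e6)   # ~133.3 Mbit/s -- close to the ~130 Mbit/s in the question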

While the 152Mb tier will max out at 161Mb/s burst download speed (your modem is over-profiled to guarantee speeds), average speeds will often be slightly lower due to several factors:

  • Rate limiting by the server.
  • TCP Receive Window needs time to ramp up speed.
  • Cable modem request-grant cycle.
  • Congestion at the node. You're sharing your cable connection (and therefore your downstream channels) with hundreds of other people. The 8 x 256 QAM downstream channels you have locked on your cable modem have a maximum usable bandwidth of 400Mb in total, coming from the node. This is shared between you and all the other users on your cable with the same channels as you. When other users are using their connection during your download, the speeds will naturally vary a bit.
  • Congestion on the route.
  • Congestion at the server.
  • Any packet loss and re-transmission.

Upstream bandwidth is highly contended with other users on your cable to the node.

If you have 2 x 16 QAM upstream channels locked, then you're sharing 2 x 17Mb = 34Mb with many other users. If you have 2 x 64 QAM upstream channels locked, then you're sharing 2 x 27Mb = 54Mb with many other users.

  4. Over long distances latency will become a factor in the speeds you can achieve.

You didn't state which Azure server you were using, whether UK, Europe or America.

Your iPerf3 server is in France and may or may not route through LINX, depending on your location. Congestion on the route could be a problem sometimes, once it leaves the VM network, particularly at peering points.

  5. Non-standard ports will often be treated as P2P traffic. http://www.thinkbroadband.com/faq/sections/flash-speed-test.html#323

Although there is no downstream traffic management for downloads, streaming, gaming and so on with the 30Mb and above tiers, if your traffic is classed as P2P it will be traffic-managed and the speed reduced during peak hours.

The reason is that the upstream bandwidth is very scarce as it's shared by hundreds of users, and so any program that might swamp the upstream would be very bad for everybody on your cable. That's also why the upstream is still traffic managed.

Outside peak-time you should be able to max out your connection in any way you like.

  6. Beware of tests that use small file sizes. There is a range of test files you can use here: http://www.thinkbroadband.com/download/

  7. Your download was unlikely to have been delivered by a CDN or cached inside the VM network. When I was on 152Mb I regularly downloaded and streamed at 161Mb/s directly from servers. CDNs tend to make delivery slower rather than faster!

To answer the original question fully, you would need to provide further specifics about your testing strategy.
