I am having problems with network transfer speed on a Linux server running Ubuntu 9.10. Transfers of all types of traffic run at around 1.5 MB/s over a 1000 Mbit/s wired Ethernet connection. This server achieved 55 MB/s over Samba in the recent past, and I have not changed the hardware or network set-up since then. I run updates regularly, so the machine has the latest and greatest from Ubuntu's repositories.
Hardware set-up
Desktop Windows PC – 1000 Mbit/s switch – 1000 Mbit/s switch – Linux server
All switches are Netgear, and they all show a green light for their connections, which indicates a 1000 Mbit/s link (the lights are yellow when a connection is only 100 Mbit/s). Other diagnostic information:
root@server:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0c:6e:3e:ae:36
          inet addr:192.168.1.30  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:6eff:fe3e:ae36/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:28678 errors:0 dropped:0 overruns:0 frame:0
          TX packets:73531 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2109780 (2.1 MB)  TX bytes:111039729 (111.0 MB)
          Interrupt:22

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:113 errors:0 dropped:0 overruns:0 frame:0
          TX packets:113 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:23469 (23.4 KB)  TX bytes:23469 (23.4 KB)
root@server:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pg
        Wake-on: g
        Current message level: 0x00000037 (55)
        Link detected: yes
root@server:~# mii-tool
eth0: negotiated 1000baseT-FD flow-control, link ok
The server therefore believes it has a 1000 Mbit/s connection. I have tested the transfer speed by copying files over Samba. I have also used netcat on the server (nc target 10000 < aBigFile) to send a file to the Windows machine (nc -l -p 10000) and saw similarly poor performance.
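To separate the network from the disks entirely, a variation on the netcat test above streams data straight from memory instead of reading a file (port 10000 matches the test above; iperf is an assumption, it would need to be installed on both ends):

```shell
# Send zeros straight from memory, so disk speed cannot be the bottleneck.
# Server side (Linux):
dd if=/dev/zero bs=1M count=1000 | nc -l -p 10000

# Client side: receive and discard; dd reports the achieved throughput.
nc 192.168.1.30 10000 | dd of=/dev/null bs=1M

# If iperf is available on both ends, it measures raw TCP throughput directly:
iperf -s                 # on the server
iperf -c 192.168.1.30    # on the client
```

If this memory-to-memory stream is also stuck around 1.5 MB/s, the disks and Samba are exonerated and the problem lies in the network stack or the wire.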
I tested the speed of the hard drives using hdparm and got:
root@server:~# hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   1436 MB in  2.00 seconds = 718.01 MB/sec
 Timing buffered disk reads:  444 MB in  3.02 seconds = 147.24 MB/sec
Reading the same file with dd produced the following:
paul@server:/home/share/Series/New$ dd if=aBigFile of=/dev/null
3200369+1 records in
3200369+1 records out
1638589012 bytes (1.6 GB) copied, 12.7091 s, 129 MB/s
I am stumped. What could be causing this poor network performance, roughly two orders of magnitude below what the network is capable of?
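For scale: gigabit Ethernet carries roughly 125 MB/s of raw bandwidth (somewhat less after protocol overhead), so the observed rate really is close to two orders of magnitude short:

```shell
# ~125 MB/s theoretical gigabit payload vs ~1.5 MB/s observed
awk 'BEGIN { printf "%.0f times slower\n", 125 / 1.5 }'
# prints "83 times slower"
```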
Best Answer
In my professional experience, I've struggled to get solid network performance out of Samba on GNU/Linux. You mention you have achieved 55 MB/s with it, which I believe, so I'm guessing something else is at play.
However, have you tried NFS, FTP, and SCP? Is the poor bandwidth consistent across the different protocols? If so, the problem is likely narrowed down to the physical connection. If you get inconsistent results, it's likely a software problem.
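As a sketch of that comparison (the host name desktop-pc is a placeholder): time the same file over another protocol, then turn bytes and seconds into a rate. Plugging in the figures from the dd run in the question reproduces its 129 MB/s:

```shell
# Time the transfer over scp, then compute the effective rate by hand.
time scp aBigFile user@desktop-pc:/tmp/

# rate = bytes / seconds; here using the dd figures from the question:
awk -v bytes=1638589012 -v secs=12.7091 \
    'BEGIN { printf "%.0f MB/s\n", bytes / secs / 1000000 }'
# prints "129 MB/s"
```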
Aside from testing the other protocols: are you using compression or encryption on the transfer? For example, rsync -z is sweet for enabling compression, but it comes at a CPU cost that can severely reduce overall transfer speed. If you use rsync over SSH, you have encryption on top of compression, and your CPU will be under even more stress, causing severe speed penalties.
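A quick way to see that cost is to run the same transfer with and without the -z flag (destination host and path are placeholders); rsync over SSH pays for encryption in both cases, and additionally for compression with -z:

```shell
# Same file, same SSH transport; only the -z flag differs.
time rsync -av  aBigFile user@desktop-pc:/tmp/   # encryption only
time rsync -avz aBigFile user@desktop-pc:/tmp/   # encryption + compression

# rsync to a local path skips SSH entirely, as a CPU-overhead-free baseline:
time rsync -av aBigFile /tmp/
```

If the uncompressed, unencrypted baseline is fast while the SSH variants crawl, the CPU is the bottleneck rather than the network.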