Linux – How to best transfer 1.8TB of individual files using LFTP

arch linux, ftp, linux, raspberry pi

I've started mirroring the files and directories on an FTP server to my external HDD using a Raspberry Pi.

I used the following commands:

lftp user@adress.com

mirror --use-pget-n=8 /

The files are downloading, but it seems quite slow: after about 10 hours of running, only 139 GB has been downloaded. When I ran the download using FileZilla on my desktop (running Linux Mint), I downloaded 200 GB in 4 hours.

I didn't know how many segments each file should be downloaded in, so I chose 8. I'm not entirely sure of the benefit of splitting files into segments, however.

My download speed according to Speedtest.net peaks at around 40 Mb/sec and is usually around 30 Mb/sec.

Are there any parameters I could use to improve performance, or is it down to the Raspberry Pi's hardware?

I would archive all the files and download them in one go, only I don't have any other access to the server.

Thanks.

Best Answer

The fastest way to download the files from that FTP server would be to run lftp on the server itself and download them over the loopback interface.
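For illustration only, a minimal sketch of what that would look like if you had shell access (the destination path here is hypothetical):

ssh user@adress.com
lftp -e 'mirror --use-pget-n=8 / /mnt/backup; quit' ftp://127.0.0.1

With the FTP traffic on loopback, only the server's disks limit throughput. Of course, you said you have no other access to the server, so this is out of reach.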

Your question is imprecise, but let's leave that aside for now.


Speedtest gives bits per second. 40 Mbit/s is 5 MB/s. 5 MB/s * 3600 s * 4 = 72,000 MB, or roughly 70 gigabytes. If your speed were 40 MB/s (320 Mbit/s), you would have downloaded roughly 560 gigabytes in four hours. I don't think even the newest consumer-grade hard drives can write that fast, so you would need an SSD. Are those even available in 500 or more gigs?
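A quick shell sanity check of that arithmetic (assuming 40 Mbit/s, i.e. 5 MB/s):

echo $(( 5 * 3600 * 4 ))    # 72000 MB, roughly 70 GB in four hours at 5 MB/s
echo $(( 40 * 3600 * 4 ))   # 576000 MB, roughly 560 GB in four hours at 40 MB/s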

You are seeing a bottleneck of roughly 4 MB/s (139 GB in ten hours) with the described configuration, and the most likely source of that, if not the hard drive itself, is the USB connection of the external hard drive. However, you did not state that the drive was connected via USB; the RasPi could be downloading to a network drive, for all I know. Even in that case, you would still be limited by USB 2's theoretical 60 MB/s, because the Ethernet port on the RasPi is actually a USB adapter.
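One way to test that theory is to write a dummy file straight to the drive and look at the rate dd reports; the mount point below is an assumption, so adjust it to wherever your external drive is mounted:

dd if=/dev/zero of=/mnt/external/testfile bs=1M count=512 conv=fdatasync

conv=fdatasync forces the data to disk before dd exits, so the reported rate reflects the actual drive and USB path rather than the kernel's write cache. If it comes out well below your network speed, lftp is not the problem.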

My comments are inconclusive, but I think it's reasonable to conclude that the RasPi hardware is to blame for the bottleneck.


File segmentation becomes more important when you are downloading over an unstable connection (or over UDP). If your connection does not frequently drop while a single file is being transferred, application-level segmentation gives you nothing but a few extra checksums; in all likelihood, you will not even notice those.
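If you want to experiment anyway, lftp's mirror can transfer several whole files in parallel instead of segmenting individual files; the option is standard lftp, though the value of 4 below is only a guess:

mirror --parallel=4 /

Parallel whole-file transfers keep the link busy without the per-segment bookkeeping, which tends to suit large batches of individual files better than pget-style splitting.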
