mget works with a glob for the "source file" portion of the arguments (at least in OpenSSH version 7.3):
sftp> ls *.pdf
foo.pdf bar.pdf
sftp> mget *.pdf
Fetching /home/jdoe/bar.pdf to bar.pdf
Fetching /home/jdoe/foo.pdf to foo.pdf
sftp>
If a single glob would match more files than you want, you will instead need to loop over the files somehow and fetch them one by one.
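If you can script the client side, one way to fetch files individually is to build a batch file with one get command per file and feed it to sftp -b. This is a sketch; the file list and host below are placeholders, not from the original answer:

```shell
# Sketch: fetch files one at a time via an sftp batch file.
# The file list and remote host are placeholders for illustration.
files="foo.pdf bar.pdf"
batch=$(mktemp)
for f in $files; do
    printf 'get %s\n' "$f" >> "$batch"
done
cat "$batch"
# sftp -b "$batch" jdoe@example.com   # run this against a real server
```

The advantage over an interactive mget is that you control exactly which files end up in the batch file before any transfer starts.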
When you list the directory contents with the ls command, by default it sorts the listing into alphanumeric order according to the current locale's sorting rules. It is easy to assume that this is the "natural order" of things within the filesystem, but this isn't true.
Most filesystems don't sort their directories in any way: when adding a new file to a directory, the new file basically gets the first free slot in the directory's metadata structure. Sorting is only done when displaying the directory listing to the user. If a single directory has hundreds of thousands or millions of files in it, this sorting can actually require non-trivial amounts of memory and processing power.
When the order in which the files are processed does not matter, the most efficient way is to just read the directory metadata in order and process the files in the order encountered without any explicit sorting. In most cases this would mean the files will be processed basically in the order they were added to the directory, interspersed with newer files in cases where an old file was deleted and a later-added file reclaimed its metadata slot.
Some filesystems might use tree structures or other internal designs that enforce a particular order on their directory entries as a side effect. But such an ordering might be based on the inode numbers of the files or some other filesystem-internal detail, and so would not be guaranteed to be useful to humans for any practical purpose.
As @A.B said in the question comments, a find command, ls -f, or ls --sort=none would list the files without any explicit sorting, in whatever order the filesystem stores its directory entries.
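The difference is easy to see with a scratch directory (the file names here are arbitrary; --sort=none is specific to GNU ls, while -f is more widely available):

```shell
# Create files in non-alphabetical order, then compare listings.
d=$(mktemp -d)
touch "$d/banana" "$d/apple" "$d/cherry"
sorted=$(ls "$d")    # locale-sorted: apple, banana, cherry
raw=$(ls -f "$d")    # raw directory order; also includes . and ..
echo "$sorted"
echo "$raw"
```

The sorted listing is deterministic, while the -f order depends on how this particular filesystem stored the entries, so it may or may not match the creation order.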
Best Answer
sftp has limited capabilities. Nonetheless, the get command has an option which may do the trick: get -a completes partial downloads, so if a file is already present on the client and is at least as large as the file on the server, it won't be downloaded. If the file is present but shorter, only the end of the file will be transferred, which makes sense if the local file is the product of an interrupted download.

The easiest way to do complex things over SFTP is to use SSHFS. SSHFS is a filesystem that uses SFTP to make a remote filesystem appear as a local filesystem. On the client, SSHFS requires FUSE, which is available on most modern unices. On the server, SSHFS requires only SFTP; if the server allows SFTP, then you can use SSHFS with it.
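A typical SSHFS session looks like the following sketch. The host name and remote path are placeholders; the sshfs and fusermount lines are commented out because they need a real server and FUSE to run:

```shell
# Sketch: mount a remote directory with SSHFS (host/path are placeholders).
mnt=$(mktemp -d)                              # local mount point
# sshfs jdoe@example.com:/home/jdoe "$mnt"    # needs a real server + FUSE
# cp "$mnt"/*.pdf ~/Downloads/                # ordinary tools now just work
# fusermount -u "$mnt"                        # unmount ("umount" on BSD/macOS)
echo "$mnt"
```

Once mounted, any local tool (cp, rsync, find, your file manager) operates on the remote files transparently, which is what makes "complex things over SFTP" easy.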
Note that rsync over SSHFS can't take advantage of the delta transfer algorithm, because it's unable to compute partial checksums on the remote side. That's irrelevant for a one-time download but wasteful if you're synchronizing files that have been modified. For efficient synchronization of modified files, use rsync -a server:/remote/path /local/path/, but this requires SSH shell access, not just SFTP access. The shell access can be restricted to the rsync command, though.
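One common way to restrict an account to rsync is rrsync, a wrapper script distributed with rsync, via a forced command in the server's authorized_keys file. The path and key material below are illustrative only:

```
# Server-side ~/.ssh/authorized_keys entry (all on one line).
# "-ro" makes the exported path read-only for this key.
command="/usr/bin/rrsync -ro /remote/path",restrict ssh-ed25519 AAAAC3... jdoe@client
```

With this in place, the client can run rsync against /remote/path with that key, but cannot open an interactive shell or touch anything outside the exported path.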