If you are happy to keep a copy of the data on the intermediate machine, you could simply write a script that updates the local copy using server1 as a reference, then updates the backup on server2 using the local copy as a reference:
#!/bin/sh
rsync user@server1:/path/to/stuff /path/to/local/copy -a --delete --compress
rsync /path/to/local/copy user@server2:/path/to/where/stuff/should/go -a --delete --compress
Using a simple script means you have the desired single command to do everything. This could of course be a security no-no if the data is sensitive (you, or others in your company, might not want a copy floating around on your laptop). If server1 is local to you then you could just delete the local copy afterwards (as it will be quick to reconstruct over the local LAN next time).
Constructing a tunnel so the servers can effectively talk to each other more directly should be possible like so:
- On server2, make a copy of /bin/sh as /usr/local/bin/shforkeepalive. Use a symbolic link rather than a copy, so you don't have to update it after security updates that patch /bin/sh.
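A minimal sketch of that step (on server2, DEST would be /usr/local/bin and you would need root; it defaults to a temporary directory here so the sketch is safe to dry-run anywhere):

```shell
# Create the renamed interpreter as a symlink so it tracks security
# updates to /bin/sh; -f makes it safe to re-run.
DEST=${DEST:-$(mktemp -d)}
ln -sf /bin/sh "$DEST/shforkeepalive"
ls -l "$DEST/shforkeepalive"
```

The only point of the separate name is that the later killall shforkeepalive matches this process and nothing else.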
- On server2, create a script that does nothing but loop, sleeping for a few seconds then echoing a small amount of text, and have it use the new "copy" of sh (save it as, say, /usr/local/bin/keepalivescript):
#!/usr/local/bin/shforkeepalive
while [ "1" != "0" ]; do
    echo Beep!
    sleep 5
done
(the echo probably isn't needed, as the session is not going to be idle long enough to time out even if sshd is configured to ignore keep-alive packets from the ssh client)
Now you can write a script on your laptop that starts your reverse tunnel in the background, tells server1 to use rsync to perform the copy operation, then kills the reverse tunnel by killing the looping script (which will close the SSH session):
#!/bin/sh
ssh user@server2 -L2222:127.0.0.1:22 /usr/local/bin/keepalivescript &
ssh user@server1 -R2222:127.0.0.1:2222 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'
ssh user@server2 killall shforkeepalive
The way this works:
- Line 1: standard "command to use to interpret this script" marker
- Line 2: start an SSH connection with a reverse tunnel and run the keepalive script over it to keep it open. The trailing & tells the shell to run this in the background so the next lines can run without waiting for it to finish
- Line 3: start a tunnel that will connect to the tunnel above so server1 can see server2, and run rsync to perform the copy/update over this arrangement
- Line 4: kill the keep-alive script once the rsync operation completes (and so the second ssh call returns), which will end the first ssh session.
This doesn't feel particularly clean, but it should work. I've not tested the above, so you might need to tweak it. Making the rsync command a single-line script on server1 may help by reducing the need to escape characters such as the ' in the calling ssh command.
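For example, a hypothetical wrapper on server1 (the name push-stuff is my invention; the paths are the placeholders from above) would reduce the laptop-side call to ssh user@server1 push-stuff with no quoting worries:

```shell
# Sketch: create the wrapper script as it might live on server1.
cat > push-stuff <<'EOF'
#!/bin/sh
exec rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update \
    -a --delete --compress -e 'ssh -p 2222'
EOF
chmod +x push-stuff
```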
BTW: you say "don't ask" about why the two servers cannot see each other directly, but there is often good reason for this. My home server and the server its online backups are held on cannot log in to each other (and have different passwords+keys for all users) - this means that if one of the two is hacked it cannot be used as an easy route into the other, so my online backups are safer (someone malicious deleting my data from the live server can't use its ability to update the backups to delete said backups, as it has no direct ability to touch the main backup site). Both servers can connect to an intermediate server elsewhere - the live server is set to push its backups (via rsync) to the intermediate machine early in the morning, and the backup server is set (a while later, to allow step one to complete) to connect and collect the updates (again via rsync, followed by a snapshotting step in order to maintain multiple ages of backup). This technique may be usable in your circumstances too, and if so I would recommend it as a much cleaner way of doing things.
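That push-then-collect arrangement can be sketched as two crontab entries (the times, paths, and the intermediate hostname are all placeholders, not my real setup):

```
# On the live server: push backups to the intermediate machine
30 2 * * * rsync -a --delete --compress /srv/live/ backup@intermediate:/holding/live/

# On the backup server, later: collect from the intermediate machine
30 4 * * * rsync -a --delete --compress backup@intermediate:/holding/live/ /backups/live/current/
```

A snapshotting step (hard-link copies, for instance) would follow the collection to keep multiple ages of backup.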
Edit: Merging my hack with Aaron's to avoid all the mucking about with copies of /bin/sh and a separate keep-alive script on server2, this script on your laptop should do the whole job:
#!/bin/sh
ssh user@server2 -L2222:127.0.0.1:22 sleep 60 &
pid=$!
trap "kill $pid" EXIT
ssh user@server1 -R2222:127.0.0.1:2222 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'
As with the above, rsync is connecting to localhost:2222 which forwards down the tunnel to your laptop's localhost:2222 which forwards through the other tunnel to server2's localhost:22.
Edit 2: If you don't mind server1 having a key that allows it to authenticate with server2 directly (even though it can't see server2 without a tunnel) you can simplify further with:
#!/bin/sh
ssh user@server1 -R2222:123.123.123.123:22 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'
where 123.123.123.123 is a public address for server2, which could be used as a copy+paste one-liner instead of a script.
Rsync will obviously be faster than scp if the target already contains some of the source files, since rsync only copies the differences. But I suspect your question was about doing a straightforward copy to an empty target.
You've passed the -z option to rsync; this turns on compression. If the network bandwidth is the limiting factor (it often is), compression can improve the transfer speed by a noticeable amount.
You can also enable compression with scp by passing the -C option. This should about even things out with rsync. Compression is not enabled by default in ssh because it saves bandwidth but adds latency and CPU overhead; latency is bad for interactive sessions (this doesn't apply to scp), and the CPU overhead is useless if the files you're copying are already compressed.
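That last point is easy to see locally: compressing data that is effectively incompressible (random bytes stand in here for an already-compressed file) costs CPU and saves nothing. A rough illustration using gzip:

```shell
# Random data is essentially incompressible, like an already-compressed file:
head -c 100000 /dev/urandom > sample.bin
gzip -c sample.bin > sample.bin.gz
wc -c sample.bin sample.bin.gz   # the .gz is no smaller (often slightly larger)
```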
Older versions of rsync used rsh rather than ssh as the default transport, so a fair comparison would have been between rsync and rcp. But ssh has been the default since 2.6.0, released on 2004-01-01.
With identical compression settings, I'd expect rsync and scp to have essentially the same speed. Please share benchmarks if you find otherwise.
Best Answer
1) Performance
scp will usually be faster for a one-off copy to an empty target, since rsync spends extra effort building file lists and comparing checksums
2) Security
scp is more secure, but if you use rsync -avz -e ssh, then rsync is just as secure, since it runs over the same ssh transport
3) Capability
rsync can 'sync' the two copies: say your scp stopped in the middle of the transfer for some reason (a network issue, perhaps) - you could use rsync to complete the transfer, sending only what is missing. scp will simply overwrite everything from scratch.
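A small local illustration of that (assumes rsync is installed; src and dst are throwaway directories):

```shell
# Simulate an interrupted copy: only one of two files made it across.
mkdir -p src dst
printf 'hello\n' > src/a.txt
printf 'world\n' > src/b.txt
cp src/a.txt dst/        # pretend the transfer died here
# rsync sees a.txt is already up to date and sends only b.txt:
rsync -a src/ dst/
ls dst
```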
rsync can also exclude certain subdirectories/files using the --exclude flag; scp can't do that.
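For example (a throwaway local sketch; the directory and file names are made up):

```shell
# Copy a tree but skip a subdirectory and editor backup files:
mkdir -p proj/src proj/cache
touch proj/src/main.c proj/src/main.c~ proj/cache/junk.tmp
rsync -a --exclude 'cache/' --exclude '*~' proj/ copy/
ls -R copy               # only src/main.c survives the filters
```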