If you store the snapshots in files, as opposed to in a file system (e.g. with zfs receive), I'm afraid this is not possible.
ZFS on the receiving side
If you use ZFS on both the sending and the receiving side, you can avoid
transferring the whole snapshot and instead send only the differences
relative to the previous one:
ssh myserver 'zfs send -i pool/dataset@2014-02-04 pool/dataset@2014-02-05' | \
zfs receive backuppool/dataset
ZFS knows about the snapshots and stores shared blocks only once. Because the
file system understands the snapshots, you can delete old ones without
problems.
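A full cycle under this scheme might look like the following sketch (the dataset and pool names, and the need to seed the backup with one full stream first, are assumptions for illustration):

```shell
# One-time: seed the backup with a full stream of the first snapshot.
ssh myserver 'zfs send pool/dataset@2014-02-04' | \
    zfs receive backuppool/dataset

# Daily: send only the delta between the previous and the new snapshot.
ssh myserver 'zfs send -i pool/dataset@2014-02-04 pool/dataset@2014-02-05' | \
    zfs receive backuppool/dataset

# Rotation: old snapshots can be destroyed on either side; ZFS keeps
# any blocks that newer snapshots still reference.
zfs destroy backuppool/dataset@2014-02-04
```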
Other file system on the receiving side
In your case you store the snapshots in individual files, and your file system
is unaware of them. As you already noticed, this breaks rotation. You can
either transmit entire snapshots, which wastes bandwidth and storage space but
lets you delete individual snapshots (they don't depend on each other), or you
can do incremental snapshots like this:
ssh myserver 'zfs send -i pool/dataset@2014-02-04 pool/dataset@2014-02-05' \
> incremental-2014-02-04:05
To restore an incremental snapshot you need the previous snapshots as well.
This means you can't delete the old incrementals.
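Concretely, restoring from such files means replaying the initial full stream and then every incremental in order (the file names here are assumptions):

```shell
# The full stream must be received first ...
zfs receive restorepool/dataset < full-2014-02-01
# ... followed by each incremental, oldest first. If any file in the
# chain is missing, the later incrementals cannot be applied.
zfs receive restorepool/dataset < incremental-2014-02-01:02
zfs receive restorepool/dataset < incremental-2014-02-02:03
```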
Possible solutions
You could do incrementals as shown in my last example and create a new
non-incremental (full) snapshot every month. The new incrementals depend only
on this full one, so you're free to delete the older snapshots.
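Sketched with assumed file names, that monthly scheme could look like this:

```shell
# First of the month: a full, self-contained stream.
ssh myserver 'zfs send pool/dataset@2014-03-01' > full-2014-03-01

# Following days: incrementals relative to the previous day.
ssh myserver 'zfs send -i pool/dataset@2014-03-01 pool/dataset@2014-03-02' \
    > incremental-2014-03-01:02

# Once the new full stream exists, last month's full and its
# incrementals can be deleted together.
rm full-2014-02-01 incremental-2014-02-*
```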
Or you could look into other backup solutions. There is
rsnapshot, which uses rsync
and hard links.
It does a very good job at rotation and is very bandwidth efficient, since it
requires a full backup only once.
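For orientation, a minimal rsnapshot.conf for this use case might look like the fragment below (the paths and retention counts are assumptions; note that rsnapshot requires tabs, not spaces, between fields):

```
snapshot_root	/backup/snapshots/
retain	daily	7
retain	weekly	4
backup	root@myserver:/home/	myserver/
```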
Then there is bareos. It does incrementals, which save bandwidth and storage
space. It has a very nice feature: it can calculate a full backup from a set
of incrementals, which enables you to delete old incrementals. But it's a
rather complex system, intended for larger setups.
The best solution, however, is to use ZFS on the receiving side. It will be
bandwidth efficient, storage efficient and much faster than the other
solutions. The only real drawback I can think of is that you should have a
minimum of 8 GiB of ECC memory on that box (you might be fine with 4 GiB if
you don't run any services and only use it for zfs receive).
You can't do exactly what you want.
Whenever you create a zfs send
stream, that stream is created as the delta between two snapshots. (That's the only way to do it as ZFS is currently implemented.) In order to apply that stream to a different dataset, the target dataset must contain the starting snapshot of the stream; if it doesn't, there is no common point of reference for the two. When you destroy the @snap0 snapshot on the source dataset, you create a situation that is impossible for ZFS to reconcile.
The way to do what you are asking is to keep one snapshot in common between both datasets at all times, and use that common snapshot as the starting point for the next send stream.
So, in step 1 you might create a snapshot @backup0, and then some time around step 6 create a snapshot @backup1 to use for updating the off-site backup. You transfer the stream that is the delta between @backup0 and @backup1 (with zfs send -I this stream also includes any intermediate snapshots), then delete @backup0 but keep @backup1, which becomes the new common snapshot. On the next refresh you create @backup2, transfer the delta between @backup1 and @backup2, delete @backup1, and so on.
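The rolling scheme described above, sketched as commands (the pool, dataset and host names are assumptions):

```shell
# Initially: create @backup0 and seed the off-site copy with a full stream.
zfs snapshot pool/dataset@backup0
zfs send pool/dataset@backup0 | ssh backuphost zfs receive backuppool/dataset

# Each refresh: new common snapshot, send the delta, then drop the old one
# on both sides. (Use -I instead of -i to also replicate any snapshots
# created in between.)
zfs snapshot pool/dataset@backup1
zfs send -i @backup0 pool/dataset@backup1 | \
    ssh backuphost zfs receive backuppool/dataset
zfs destroy pool/dataset@backup0          # @backup1 is the new common snapshot
ssh backuphost zfs destroy backuppool/dataset@backup0
```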