How can I use rsync (but neither rsnapshot nor rdiff-backup nor any other application) to create a differential backup of a directory located on my local drive to another directory located on that same local drive?
F. Hauri posted the following in an answer to How to create a local backup?:
#!/bin/bash
backRepo=/media/mydisk   # where backups are stored
backSrce=/home/user      # directory being backed up
backDest=home            # name of the mirror inside the repository
backCopy=copy            # prefix for rotated snapshot directories
backCount=9              # number of snapshots to keep

[ -d "$backRepo/$backDest" ] || mkdir "$backRepo/$backDest"

# Mirror the source into the repository.
cd "$backSrce" || exit 1
rsync -ax --delete --exclude '*~' --exclude '.DS_Store' . "$backRepo/$backDest/."

# Rotate old snapshots: copy.9 is dropped, and each copy.N becomes copy.(N+1).
cd "$backRepo" || exit 1
[ -d "$backCopy.$backCount" ] && rm -fR "$backCopy.$backCount"
for ((i=backCount; i--; )); do
    [ -d "$backCopy.$i" ] && mv "$backCopy.$i" "$backCopy.$((i+1))"
done

# Hard-link the fresh mirror as snapshot copy.0; no file data is duplicated.
((i++))
cp -al "$backDest" "$backCopy.$i"
The above script seems fairly close to what I want, but frankly, despite spending about an hour studying Easy Automated Snapshot-Style Backups with Linux and Rsync, I still have only a vague idea of how to make rsync do what I want.
Here's my use case:
I am editing a video locally on my machine. The hundreds of files associated with that video add up to less than 5 GB (five gigabytes).
Currently, I use Grsync to back up my internal drive to an external USB drive. Although I have figured out how to accomplish the same task with rsync alone, I prefer Grsync because I merely need to launch it and click one button to back up the internal directory containing my video files to the external USB drive. The entire process is silky smooth.
Every few hours, I want a fairly smooth way to back up the above-mentioned video data to my Google Drive account. I don't mind manually choosing to upload a folder to Google Drive; I actually somewhat prefer it, because doing so helps me confirm the backup is actually being made.
Every few nights before I go to bed, I have been copying the entire folder containing the video files, many gigabytes of data, up to my Google Drive account.
I prefer differential backups to incremental ones because, if I ever needed to restore my data from Google Drive, I could likely do so manually without becoming confused.
Please keep in mind that I am certainly not a Unix sysadmin at a large corporation supporting hundreds of users. I am merely one guy who wants an easy method, though not necessarily a fully automated one, to back up his data offsite every few hours in case of a catastrophic loss of data, most likely due to the theft of my computer. I am almost certain rsync can do what I want, so I am reluctant to install another application.
Best Answer
Here ya go!
For example, my current directory has three 8 KB files:
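That starting point can be reproduced in a throwaway directory (the file names here are made up):

```shell
#!/bin/bash
# Create a scratch directory containing three 8 KiB files to experiment on.
set -eu
work=$(mktemp -d)
cd "$work"
for f in file1 file2 file3; do
    dd if=/dev/zero of="$f" bs=8192 count=1 2>/dev/null
done
ls -l
```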
My full backup doesn't exist yet; let's call that directory full_bak.
First we need a full backup from which to take differentials. I've copied the script into my $HOME/bin directory as test123.sh. Running it with both arguments the same essentially performs a full backup.
Run it, then look at ../full_bak: it now holds a full copy of the files.
Make some changes
Confirm there are differences between the working directory and the full backup.
Now create a differential
Look at the differential: it holds just the file that differs from the last full backup.
Make another change
Check what's different: we now have a new file that's not in our full backup, plus the changed file from before.
Do another differential into a different directory, and see that the new differential contains everything from the first differential as well as the new file.
Differential Backups
Here's a fullbackup wrapper using test123.sh:
Here's a differential script creating subdirectories based on the hour: