Given a hard disk with an encrypted Core Storage volume (but not the decryption password, because the backup service should not have access to that), how would one back it up in a way that allows pushing it to a cloud storage provider (like Amazon S3) and supports incremental backups in the future (because you don't want to push a full 1 TB every day when only a couple of blocks have changed)?
How to backup an encrypted Core Storage volume off-site
backup core-storage encryption
Best Answer
Proposed solution:
You have an Amazon EC2 instance with an Elastic Block Store (EBS) volume large enough to hold the whole of the image you intend to back up.
Your initial backup will take a long while to sync, but if you use rsync for subsequent runs, the remote image only has to catch up with your local image's changes.
Once it is caught up, you can then initiate an EBS snapshot on Amazon's side for the EBS volume containing your encrypted image.
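The snapshot step might look like this with the AWS CLI; the volume ID is a placeholder you would substitute with your own:

```shell
# Snapshot the EBS volume holding the encrypted image
# (vol-0123456789abcdef0 is a placeholder ID).
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "encrypted image backup $(date +%F)"
```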
Rinse and repeat for each backup period + snapshot you want to keep on the remote server.
With this, you will be able to do the incremental backups using Amazon's cloud technology.
S3 has some serious limitations that make it unsuitable for this particular purpose.
The EC2 instance, if fully backed by EBS, can be shut down when you are not doing a remote sync. That is, when your backup kicks off, it can fire up the instance via Amazon's EC2 API and retrieve the dynamic hostname or IP address. Once it confirms the instance is up, it can start the rsync transfer. When done, it can shut the instance down and initiate an Amazon EBS volume snapshot.
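That whole window could be sketched as a shell script along these lines; the instance ID, volume ID, login, and paths are all placeholders, and it assumes a configured AWS CLI:

```shell
#!/bin/sh
# Placeholder IDs -- substitute your own instance and volume.
INSTANCE=i-0123456789abcdef0
VOLUME=vol-0123456789abcdef0

# Start the backup target and wait for it to come up.
aws ec2 start-instances --instance-ids "$INSTANCE"
aws ec2 wait instance-running --instance-ids "$INSTANCE"

# The public IP is dynamic, so look it up on every run.
HOST=$(aws ec2 describe-instances --instance-ids "$INSTANCE" \
    --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)

# Sync the local image, then shut the instance back down.
rsync -az --inplace /Volumes/Backups/encrypted.img "ec2-user@$HOST:/mnt/backup/"
aws ec2 stop-instances --instance-ids "$INSTANCE"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE"

# Finally, snapshot the EBS volume for this backup period.
aws ec2 create-snapshot --volume-id "$VOLUME" \
    --description "nightly backup $(date +%F)"
```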
Edit:
rsync performs chunk/block-level diffs for larger files, and you can tune the block size used for the delta transfer. You can also have the data stream sent to the remote server compressed, saving you traffic.
Caveats about S3 vs EBS:
Unless the solution you employ supports splitting a single large file into segments and sending them in parallel, Amazon S3 throttles uploads to under 400 KB/s once the file exceeds a certain size.
I back up my servers to S3 as compressed tarballs built from rsync differentials. Even at tarball sizes of about 500 MB, S3 will throttle. To work around this, you have to split the file into parts before uploading; otherwise the backup to S3 takes forever.
An EC2 instance with EBS volumes, by contrast, is faster and requires no file splitting, simplifying both backup and restoration.
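The splitting workaround above can be sketched with the standard `split` tool; the file here is a throwaway stand-in for the tarball:

```shell
# Create a 5 MB stand-in for the tarball, split it into 1 MB parts
# (each part could then be uploaded to S3 in parallel), and verify
# it reassembles losslessly.
dd if=/dev/urandom of=/tmp/backup.tar.gz bs=1024 count=5120 2>/dev/null
split -b 1M /tmp/backup.tar.gz /tmp/backup.tar.gz.part-
ls /tmp/backup.tar.gz.part-*    # five 1 MB parts
cat /tmp/backup.tar.gz.part-* > /tmp/reassembled.tar.gz
cmp /tmp/backup.tar.gz /tmp/reassembled.tar.gz && echo "reassembly OK"
```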