There is no way to do that at the moment. The volume UUID is actually used in each node of the chunk tree, so you would have to change it there as well, assuming the headers of the chunks/devices are not hashed. BTRFS was really not designed to allow this kind of backup.
This is really sad, but the easiest way to handle that is to use another computer.
If I may, I'd like to suggest you stop backing up your data this way.
If your partitions are important as a whole, back up with dd/Clonezilla. When you need to restore your backups, restore the whole partition at once. Don't do this kind of hybrid backup: you specifically saved your partitions at the block level, so you have to restore them at the block level. Otherwise, you are using a spoon to cut meat. As you certainly noticed, this approach is usually avoided because it offers no versatility.
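As a concrete illustration of the block-level save/restore cycle, here is a minimal sketch that uses a scratch file in place of a real partition (all file names are hypothetical; on a real system the source would be something like /dev/sdXn and the commands would need root):

```shell
# Scratch file standing in for a real partition (hypothetical).
truncate -s 8M fake-partition
# Block-level save of the whole "partition":
dd if=fake-partition of=partition-backup.img bs=1M status=none
# Restore is the same copy in the opposite direction, again at the block level:
dd if=partition-backup.img of=fake-partition bs=1M status=none conv=notrunc
cmp -s fake-partition partition-backup.img && echo "images match"
```

The point is the symmetry: whatever granularity you saved at is the granularity you must restore at.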
If your data are important, back up with rsync or a similar tool onto another disk: your data will always be accessible, you back up exactly what you want, you are backing up at the file level, etc.
Note that BTRFS has some (for now limited) backup features. BTRFS is moving fast; I expect more backup features will come out in the future.
Oh, you've been warned already ;) See: Automated Clonezilla backup and GPG encryption
BTW, encryption is easier to apply using either LUKS for partition-based encryption or EncFS/eCryptfs for file-based encryption.
Currently, btrfs does not support n-way mirrors.
Btrfs does have a special replace subcommand:
btrfs replace start /dev/left /dev/new_device /mnt/foo
Reading between the lines of the btrfs-replace
man page, this command should be able to use both existing legs - e.g. for situations where both legs have read errors but the two error sets are disjoint.
The btrfs replace command runs in the background - you can check its progress via the status
subcommand, e.g.:
btrfs replace status /mnt/foo
45.4% done, 0 write errs, 0 uncorr. read errs
Alternatively, one can also add a device to a raid-1 filesystem and then delete an existing leg:
btrfs dev add /dev/mapper/new_device /mnt/foo
btrfs dev delete /dev/mapper/right /mnt/foo
The add
should return fast, since it just adds the device (issue a btrfs fi show
to confirm).
The following delete
should trigger a rebalancing among the remaining devices such that each extent is available on each remaining device. Thus, the command is potentially very long-running. This method also works for the situation described in the question.
In comparison with btrfs replace
, the add/delete cycle spams the syslog with low-level info messages. It also takes much longer to finish (e.g. 2-3 times longer on my test system with 3 TB SATA drives at 80% FS usage).
Finally, after the actual replacement, if the new devices are larger than the original ones, you will need to issue a btrfs fi resize
on each device to utilize the entire available disk space. For the replace
example at the top, this looks something like:
btrfs fi resize <devid>:max /mnt/foo
where devid
stands for the device ID that btrfs fi show
returns.
Try cd'ing out of the emptydir and running
lsof +D /path/to/emptydir
to see what has it open. Depending on what the directory is and how it's used, perhaps something is opening and closing the directory very fast, and you just happen to catch it when it doesn't have anything in it when running ls but does have something when running
rm -fr emptydir
It shouldn't make any difference in this case, but try also running
rmdir emptydir
The total number at the top of your ls
output (insgesamt is German for "total") does indicate an empty directory.
Knowing the filesystem type may be helpful too. You probably also want to run
fsck
on it and see if that helps.