I will just provide a short answer because I think this is being overthought.
If you read the btrfs kernel wiki about the (sub-)commands, you will find that there is a pair of commands for:
- making a "backup":
btrfs-send
- and restoring it on the other end:
btrfs-receive
(note that btrfs-restore is a different tool, for salvaging files from a damaged filesystem)
To be clear: this means btrfs is not (designed to be) a backup tool, but a snapshotting filesystem, built around the idea of rolling back when needed. That makes it flexible, not a backup.
Therefore: no, do not use it as a backup. Use it as a versioned filesystem where you can test things and roll back, but do not rely on it alone.
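That said, snapshots *can* feed a real backup if the stream lands on a different device. A minimal sketch (mount points and snapshot names are hypothetical, and the target must itself be a btrfs filesystem):

```shell
# Take a read-only snapshot (btrfs send requires read-only sources):
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data@2024-01-01

# Stream it to a *different* device; only then is it a backup:
btrfs send /mnt/pool/data@2024-01-01 | btrfs receive /mnt/backup/
```

A snapshot left on the same disk protects against mistakes, not against losing the disk.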
For a general overview, see Resolving Problems with ZFS; the most interesting part:
The second section of the configuration output displays error statistics. These errors are divided into three categories:
- READ – I/O errors that occurred while issuing a read request
- WRITE – I/O errors that occurred while issuing a write request
- CKSUM – Checksum errors, meaning that the device returned corrupted data as the result of a read request
These errors can be used to determine if the damage is permanent. A small number of I/O errors might indicate a temporary outage, while a large number might indicate a permanent problem with the device. These errors do not necessarily correspond to data corruption as interpreted by applications. If the device is in a redundant configuration, the devices might show uncorrectable errors, while no errors appear at the mirror or RAID-Z device level. In such cases, ZFS successfully retrieved the good data and attempted to heal the damaged data from existing replicas.
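A hypothetical `zpool status` excerpt shows where these counters appear. Note the CKSUM errors on one side of the mirror while the mirror vdev and the pool report zero, because ZFS satisfied those reads from the healthy copy:

```
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0    12
```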
Now, for your questions:
First, what does "device" mean in this context? Are they talking about a physical device, the vdev or even something else? My assumption is that they are talking about every "device" in the hierarchy. The vdev error count then probably is the sum of the error counts of its physical devices, and the pool error count probably is the sum of the error counts of its vdevs. Is this correct?
Each device is checked independently and its own errors are summed up. An error propagates upwards only if it is present on both sides of a mirror, or if the vdev is not redundant at all. In other words, each line shows the count of errors affecting that device or vdev itself (which is also in line with the logic of displaying each line separately).
But what I am really interested in is whether there have been checksum errors at ZFS level (and not hardware level). I am currently convinced that CKSUM is showing the latter (otherwise, it wouldn't make much sense), but I'd like to know for sure.
Yes, these are errors on the hardware side (non-permanent things like faulty cables, suddenly removed disks, power loss, etc.). I think that is also a matter of perspective: faults on the "software side" would mean bugs in ZFS itself, i.e. unwanted behavior that was never checked for (assuming all normal user interactions are deemed correct) and that ZFS cannot recognize on its own. Fortunately, such bugs are quite rare nowadays. Unfortunately, they are also quite severe much of the time.
Third, assuming the checksum errors they are talking about are indeed checksum errors at the ZFS level (and not the hardware level), why on earth do they only show the count of uncorrectable errors? This does not make any sense. We would like to see every checksum error, whether correctable or not, wouldn't we? After all, a checksum error means there has been some sort of data corruption on the disk that the hardware did not detect, so we probably want to replace that disk as soon as any error appears (even if the mirror disk can still act as a "backup"). So possibly I have not yet understood what exactly they mean by "uncorrectable".
Faulty disks are already indicated by read/write errors (for example, a URE reported by a disk). Checksum errors are what you are describing: a block was read, its contents did not match the checksums of the blocks above it in the tree, so instead of being returned it was discarded and noted as an error. "Uncorrectable" is more or less true by definition: if you get garbage and know that it is garbage, you cannot correct it, but you can ignore it and not use it (or try again). The wording might be unnecessarily confusing, though.
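The principle is easy to demonstrate outside ZFS: store a checksum separately from the data, and a later mismatch tells you the block is garbage without telling you how to fix it. A toy sketch (file names are made up; assumes GNU coreutils):

```shell
# Write a block and record its checksum "above" it, as ZFS does in its tree:
printf 'important data' > block.bin
sha256sum block.bin > block.sha256

# Simulate silent bit rot: one flipped character, no I/O error anywhere:
printf 'imp0rtant data' > block.bin

# Verification detects the corruption; the data is known-bad but not fixable:
sha256sum -c block.sha256 || echo "CKSUM error: block discarded"
```

With a mirror, the "try again" path is reading the other copy, whose checksum does match.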
According to that paragraph, there could be two sorts of errors: data corruption errors and device errors. A mirror configuration of two disks is undoubtedly redundant, so (according to that paragraph) it is not a data corruption error if ZFS encounters a checksum error on one of the disks (at the ZFS checksum level, not the hardware level). That means (once more according to that paragraph) that this error will not be recorded as part of the persistent error log.
Data corruption in this paragraph means that some of your files are partly or completely destroyed and unreadable, and you need to fetch your last backup as soon as possible and restore them. It is the point where all of ZFS's precautions have already failed and it cannot help you anymore (though at least it informs you now, not at the next server boot's disk check).
For me, the main reason for switching to ZFS was its ability to detect silent bit rot on its own, i.e. to detect and report errors on devices even if those errors did not lead to I/O failures at the hardware / driver level. But not including such errors in the persistent log would mean losing them upon reboot, and that would be fatal (IMHO).
The idea behind ZFS is that the system does not need to be taken down to find such errors, because the filesystem can be checked while online. Remember, ten years ago this feature was absent from most small-scale systems. So the idea was that (on a redundant configuration, of course) you can catch read and write errors from the hardware and correct them using known-good copies. Additionally, you can scrub each month to read all data (data that is never read cannot be known to be good) and correct any errors you find.
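The routine check described above boils down to two commands; a sketch assuming a hypothetical pool named "tank":

```shell
# Walk all data, verify every checksum, repair from redundancy where possible:
zpool scrub tank

# Check progress and any READ/WRITE/CKSUM counters the scrub turned up:
zpool status tank
```

Many installations simply schedule the scrub monthly, e.g. via a root cron entry such as `0 3 1 * * /sbin/zpool scrub tank`.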
It is like a big archive or library of old books: you have valuable and not-so-valuable books, some of which may decay over time, so you need a person who goes around every week or month, inspects all pages of all books for mold, bugs, etc., and tells you if he finds anything. If you have two identical libraries, he can go over to the other building, look at the same page of the same book, and replace the destroyed page in the first library with a copy. If he never checked any book, you might be in for a nasty surprise 20 years later.
Best Answer
You should have a look at bup.
bup supports bup-fsck (with par2), which can generate redundancy data and repair damaged backups.
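A minimal workflow might look like this (paths are hypothetical; bup keeps its repository in ~/.bup by default):

```shell
bup init                                      # create the repository
bup index /home/user/documents                # scan the tree to back up
bup save -n documents /home/user/documents    # store a deduplicated backup
bup fsck -g                                   # generate par2 recovery blocks

# Later: verify the repository; with -r, attempt repair from the par2 data:
bup fsck
```

The par2 step is what makes this more than a snapshot: it adds redundancy that can survive partial corruption of the backup itself.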