Errors were detected and fixed with the scrub.
Before that, you hadn't attempted any writes, just reads, so everything was served from the ARC (i.e., cached in RAM) and the on-disk corruption remained undetected.
I overlooked the "0 errors". Here is a corrected explanation of what has likely happened:
You overwrote ~5 MB at the beginning of the disk with zeroes.
- The first 3.5 MB were harmless; ZFS reserves that area for non-ZFS data and never reads or writes anything there.
- The next 0.5 MB overwrote two of the four vdev labels.
- The next 1 MB was written to an area that might not have contained any data or metadata.
The vdev label corruption went unnoticed due to their high redundancy (six of them were still healthy) and the fact that the labels are overwritten atomically anyway.
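You can inspect the labels yourself with `zdb`; the device path below is a placeholder for an actual pool member:

```shell
# Dump the vdev labels of a pool member. ZFS keeps four copies per device:
# two at the start and two at the end, which is why wiping only the
# beginning of the disk cannot destroy all of them.
# Replace /dev/sdX with your actual disk or partition.
zdb -l /dev/sdX
```

A healthy device prints all four labels; a wiped one is reported as failed to unpack, and the remaining copies are rewritten on the next label update.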
For a general overview, see Resolving Problems with ZFS; the most relevant part:
The second section of the configuration output displays error statistics. These errors are divided into three categories:
- READ – I/O errors that occurred while issuing a read request
- WRITE – I/O errors that occurred while issuing a write request
- CKSUM – Checksum errors, meaning that the device returned corrupted data as the result of a read request
These errors can be used to determine if the damage is permanent. A small number of I/O errors might indicate a temporary outage, while a large number might indicate a permanent problem with the device. These errors do not necessarily correspond to data corruption as interpreted by applications. If the device is in a redundant configuration, the devices might show uncorrectable errors, while no errors appear at the mirror or RAID-Z device level. In such cases, ZFS successfully retrieved the good data and attempted to heal the damaged data from existing replicas.
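As a hypothetical illustration (pool name and device names made up), this is where those three counters appear in `zpool status`. Note how a checksum error on one mirror member does not propagate to the mirror vdev, because the other side still returned good data:

```shell
zpool status tank
# Illustrative output for a two-way mirror where one disk once returned bad data:
#
#   pool: tank
#  state: ONLINE
# config:
#         NAME        STATE     READ WRITE CKSUM
#         tank        ONLINE       0     0     0
#           mirror-0  ONLINE       0     0     0
#             sda     ONLINE       0     0     1
#             sdb     ONLINE       0     0     0
```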
Now, for your questions:
First, what does "device" mean in this context? Are they talking about a physical device, the vdev or even something else? My assumption is that they are talking about every "device" in the hierarchy. The vdev error count then probably is the sum of the error counts of its physical devices, and the pool error count probably is the sum of the error counts of its vdevs. Is this correct?
Each device is checked independently and all of its own errors are summed up. If such an error is present on both sides of a mirror, or the vdev is not redundant itself, it propagates upwards. So, in other words, it is the number of errors affecting the vdev itself (which is also in line with the logic of displaying each device on its own line).
But what I am really interested in is whether there have been checksum errors at ZFS level (and not hardware level). I am currently convinced that CKSUM is showing the latter (otherwise, it wouldn't make much sense), but I'd like to know for sure.
Yes, it is the hardware side (non-permanent stuff like faulty cables, suddenly removed disks, power loss, etc.). I think that is also a matter of perspective: faults on the "software side" would mean bugs in ZFS itself, i.e. unwanted behavior that has not been checked for (assuming all normal user interactions are deemed correct) and that is not recognizable by ZFS itself. Fortunately, such bugs are quite rare nowadays. Unfortunately, they are also quite severe much of the time.
Third, assuming the checksum errors they are talking about are indeed checksum errors at the ZFS level (and not the hardware level), why on earth do they only show the count of uncorrectable errors? This does not make any sense. We would like to see every checksum error, whether correctable or not, wouldn't we? After all, a checksum error means that there has been some sort of data corruption on the disk which has not been detected by the hardware, so we probably want to replace that disk as soon as there is any error (even if the mirror disk can still act as a "backup"). So possibly I have not yet understood what exactly they mean by "uncorrectable".
Faulty disks are already indicated by read/write errors (for example, an URE from a disk). Checksum errors are what you are describing: a block was read, its content was not deemed correct by the checksums of the blocks above it in the tree, so instead of being returned it was discarded and noted as an error. "Uncorrectable" is more or less true by definition: if you get garbage and know that it is garbage, you cannot correct it, but you can ignore it and not use it (or try again). The wording might be unnecessarily confusing, though.
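The principle can be sketched with plain coreutils. This is a toy model, not how ZFS actually stores checksums (ZFS keeps each block's checksum in the parent block pointer of a Merkle tree); it only illustrates "detect garbage on one replica, heal it from the good one":

```shell
# Toy model of checksum verification and self-healing on a two-way mirror.
set -e
demo=/tmp/zfs_cksum_demo
mkdir -p "$demo"

printf 'important data\n' > "$demo/copy_a"        # replica on "disk A"
cp "$demo/copy_a" "$demo/copy_b"                  # replica on "disk B"
# Checksum recorded at write time (ZFS would store this in the parent block):
sha256sum "$demo/copy_a" | awk '{print $1}' > "$demo/stored.sum"

printf 'silent bitrot!\n' > "$demo/copy_b"        # disk B silently returns garbage

# On read: verify each replica against the stored checksum. The failing one
# is "uncorrectable" by itself, but can be healed from the good copy.
want=$(cat "$demo/stored.sum")
for f in "$demo/copy_a" "$demo/copy_b"; do
  if [ "$(sha256sum "$f" | awk '{print $1}')" = "$want" ]; then
    good="$f"
  else
    bad="$f"
  fi
done
cp "$good" "$bad"                                 # self-heal the bad replica
```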
According to that paragraph, there could be two sorts of errors: Data corruption errors and device errors. A mirror configuration of two disks is undoubtedly redundant, so (according to that paragraph) it is no data corruption error if ZFS encounters a checksum error on one of the disks (at the ZFS checksum level, not the hardware level). That means (once more according to that paragraph) that this error will not be recorded as part of the persistent error log.
Data corruption in this paragraph means some of your files are partly or completely destroyed and unreadable, and you need to fetch your last backup as soon as possible and replace them. It is the point where all of ZFS's precautions have already failed and it cannot help you anymore (but at least it informs you about this now, not at the next boot-time disk check).
For me, the main reason for switching to ZFS was its ability to detect silent bit rot on its own, i.e. to detect and report errors on devices even if those errors did not lead to I/O failures at the hardware / driver level. But not including such errors in the persistent log would mean losing them upon reboot, and that would be fatal (IMHO).
The idea behind ZFS systems is that they do not need to be taken down to find such errors, because the file system can be checked while online. Remember, 10 years ago this was a feature absent in most small-scale systems. So the idea was that (on a redundant config, of course) you can check read and write errors of the hardware and correct them by using known good copies. Additionally, you can scrub each month to read all data (because data that is never read cannot be known to be good) and correct any error you find.
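In practice that monthly check is just the following (the pool name `tank` is a placeholder):

```shell
# Start a scrub: read every allocated block and verify it against its
# checksum; on a redundant pool, bad blocks are rewritten from a good replica.
zpool scrub tank

# Watch progress and see repaired bytes / per-device error counts afterwards.
zpool status tank
```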
It is like a big archive/library of old books: you have valuable and not so valuable books, some might decay over time, so you need a person who goes around each week or month and looks at all pages of all books for mold, bugs etc., and if he finds anything, he tells you. If you have two identical libraries, he can go over to the other building, look at the same book at the same page and replace the destroyed page in the first library with a copy. If he never checked any book, you might be in for a nasty surprise 20 years later.
Best Answer
Checksum verification happens on reads, and to read everything (except free space), you can scrub regularly. For software RAID (mdadm), you can run a check with `--action=check` and then see if `mismatch_cnt` is still 0. RAID only attempts to fix read errors (by re-writing the data); for mismatched data, you have to determine manually whether it is relevant (free space or not) and whether the data or the parity is correct.
Essentially, with RAID you trust the storage not to misbehave and to report errors properly instead of silently returning false data. RAID has no checksums, nor does it verify parity on every read.
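For mdadm, that check looks like this (`/dev/md0` is a placeholder for your array; several distributions ship a cron job that triggers it monthly):

```shell
# Kick off a consistency check of the array (newer mdadm versions):
mdadm --misc --action=check /dev/md0
# Equivalent via sysfs:
echo check > /sys/block/md0/md/sync_action

# After the check finishes, a non-zero count means data and its
# mirror/parity disagreed somewhere:
cat /sys/block/md0/md/mismatch_cnt
```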