A sector is marked pending when a read fails. The pending sector is marked reallocated if a subsequent write also fails. If the write succeeds, the sector is removed from the Current Pending Sector count and assumed to be OK. (The exact behavior can differ slightly, and I'll go into that later, but this is a close enough approximation for now.)
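You can watch these counters yourself with smartctl; a minimal check, assuming the drive is /dev/sda:

```bash
# Raw values of the relevant SMART attributes: Reallocated_Sector_Ct (ID 5),
# Current_Pending_Sector (ID 197), and Reallocated_Event_Count (ID 196).
smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Reallocated_Event_Count|Current_Pending_Sector'
```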
When you run badblocks -w, each pattern is first written, then read. It's possible that the write to the flaky sector succeeds but the subsequent read fails, which again adds it to the pending sector list. I would try writing zeroes to the entire disk with dd if=/dev/zero of=/dev/sda, checking the SMART status, then reading the entire disk with dd if=/dev/sda of=/dev/null and checking the SMART status again.
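Spelled out, that procedure looks like this. It destroys all data on the disk, /dev/sda is just an assumed device name, and bs=1M is only there for speed:

```bash
# WARNING: this overwrites the entire disk.
# 1. Write zeroes to every sector, giving the drive a chance to remap
#    (or clear) the pending sector on write.
dd if=/dev/zero of=/dev/sda bs=1M

# 2. See whether Current_Pending_Sector dropped or Reallocated_Sector_Ct rose.
smartctl -A /dev/sda

# 3. Read every sector back, forcing the drive to touch the flaky area again.
dd if=/dev/sda of=/dev/null bs=1M

# 4. Check the SMART attributes once more.
smartctl -A /dev/sda
```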
Update:
Based on your earlier results with badblocks -w, I would have expected the pending sector to be cleared after writing the entire disk. But since that didn't happen, it's safe to say this disk is not behaving as expected.
Let's review the description of Current Pending Sector Count:
Count of "unstable" sectors (waiting to be remapped, because of
unrecoverable read errors). If an unstable sector is subsequently read
successfully, the sector is remapped and this value is decreased. Read
errors on a sector will not remap the sector immediately (since the
correct value cannot be read and so the value to remap is not known,
and also it might become readable later); instead, the drive firmware
remembers that the sector needs to be remapped, and will remap it the
next time it's written.[29] However some drives will not immediately
remap such sectors when written; instead the drive will first attempt
to write to the problem sector and if the write operation is
successful then the sector will be marked good (in this case, the
"Reallocation Event Count" (0xC4) will not be increased). This is a
serious shortcoming, for if such a drive contains marginal sectors
that consistently fail only after some time has passed following a
successful write operation, then the drive will never remap these
problem sectors.
Now let's review the important points:
...the drive firmware remembers that the sector needs to be remapped, and will remap it the next time it's written.[29] However some drives will not immediately remap such sectors when written; instead the drive will first attempt to write to the problem sector and if the write operation is successful then the sector will be marked good.
In other words, the pending sector should have either been remapped immediately, or the drive should have attempted to write to the sector and one of two things should have happened:
- The write failed, in which case the pending sector should have been remapped.
- The write succeeded, in which case the pending sector should have been cleared ("marked good").
I hinted at this earlier, but Wikipedia's description of Current Pending Sector suggests that the current pending sector count should always be zero after a full disk write. Since that is not the case here, we can conclude that either (a) Wikipedia is wrong (or at least incorrect for your drive), or (b) the drive's firmware cannot properly handle this error state (which I would consider a firmware bug).
If an unstable sector is subsequently read successfully, the sector is remapped and this value is decreased.
Since the current pending sector count is still unchanged after reading the entire drive, either (a) the sector could not be read successfully, or (b) the sector was read successfully (and remapped), but a different sector failed to read. Since the reallocated sector count is still 0 after the read, we can rule out (b) and conclude that the pending sector is still unreadable.
At this point, it would be helpful to know if the drive has logged any new SMART errors. My next suggestion was going to be to check whether Seagate has a firmware update for your drive, but it looks like they don't.
Although I would recommend against continuing to use this drive, it sounds like you might be willing to accept the risks involved (namely, that it could continue to act erratically and/or could further degrade or fail catastrophically). In that case, you can try to install Linux, boot from a rescue CD, then (with the filesystems unmounted) use e2fsck -l filename to manually mark the appropriate block as bad. (Just make sure you maintain good backups!)
e2fsck -l filename
Add the block numbers listed in the file specified by filename to the
list of bad blocks. The format of this file is the same as the one
generated by the badblocks(8) program. Note that the block numbers are
based on the blocksize of the filesystem. Hence, badblocks(8) must be
given the blocksize of the filesystem in order to obtain correct
results. As a result, it is much simpler and safer to use the -c
option to e2fsck, since it will assure that the correct parameters are
passed to the badblocks program.
(Note that e2fsck -c is preferred to e2fsck -l filename, and you might even want to try it, but based on your results thus far, I highly doubt e2fsck -c will find any bad blocks.)
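For illustration, a manual run might look like this; the block number 20879 and the partition /dev/sda1 are placeholders, not values from your drive:

```bash
# Block numbers (computed with the formula below) go in a plain text file,
# one per line, the same format badblocks(8) produces.
echo 20879 > /tmp/badblocks.txt

# The filesystem must be unmounted, e.g. when booted from a rescue CD.
e2fsck -l /tmp/badblocks.txt /dev/sda1

# Or let e2fsck invoke badblocks itself with the correct blocksize:
e2fsck -c /dev/sda1
```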
Of course, you'll have to do some arithmetic to convert the LBA of the faulty sector (as provided by SMART) into a filesystem block number. The Bad Blocks HowTo provides a handy formula:
b = (int)((L-S)*512/B)
where:
b = File System block number
B = File system block size in bytes
L = LBA of bad sector
S = Starting sector of partition as shown by fdisk -lu
and (int) denotes the integer part.
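As a quick sanity check, here's the same calculation in shell, with made-up numbers standing in for your actual values:

```bash
# All three inputs below are hypothetical; substitute your own.
L=167095   # LBA of the bad sector, from the SMART error log
S=63       # starting sector of the partition, from: fdisk -lu /dev/sda
B=4096     # filesystem block size, from: tune2fs -l /dev/sda1

# Shell arithmetic truncates toward zero, which gives the (int) part.
b=$(( (L - S) * 512 / B ))
echo "filesystem block: $b"   # prints 20879 for these inputs
```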
The HowTo also contains a complete example using this formula. After the OS is installed, you can confirm whether a file is occupying the flaky sector using debugfs (see the HowTo for detailed instructions).
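As a sketch of that debugfs step, again with the placeholder block number and partition from above:

```bash
# Map the filesystem block to an inode, then the inode back to a path.
debugfs -R "icheck 20879" /dev/sda1
# If icheck reports, say, inode 12345, find the file name(s) it belongs to:
debugfs -R "ncheck 12345" /dev/sda1
```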
Another option: partition around the suspected bad block
When you install your OS, you could also try to partition around the error. If I did my arithmetic right, the error is at around 81.589 MB, so you can either make /boot a little smaller and start your next partition after sector 167095, or skip the first 82 MB or so completely. A rough sketch of the first approach is below.
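Here's one way to do that from a rescue environment. This wipes the partition table, and the exact sector numbers are only illustrative margins around the suspect area:

```bash
# Leave the area around sector 167095 in an unallocated gap:
# /boot ends a bit before it, the next partition starts a bit after it.
parted --script /dev/sda \
    mklabel msdos \
    mkpart primary ext2 2048s 167000s \
    mkpart primary ext4 169984s 100%
```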
ABRT 235018779
Unfortunately, as for the ABRT error at sector 235018779, we can only speculate, but the ATA8-ACS spec gives us some clues.
From Working Draft AT Attachment 8 - ATA/ATAPI Command Set (ATA8-ACS):
6.2.1 Abort (ABRT)
Error bit 2. Abort shall be set to one if the command is not supported. Abort may be set to one if the device is not able to complete the action requested by the command. Abort shall also be set to one if an address outside of the range of user-accessible addresses is requested if IDNF is not set to one.
Looking at the commands leading up to the ABRT (several READ SECTOR(S) followed by recalibration and reinitialization)...
- "Abort shall be set to one if the command is not supported." This seems unlikely.
- "Abort may be set to one if the device is not able to complete the action requested by the command." Maybe the P-list of reallocated sectors shifts the user-accessible addresses far enough that a user-accessible address translated to sector 235018779, and the read operation was not able to complete (for what reason, we don't know...but there wasn't a CRC error, so I don't think we can conclude that sector 235018779 is bad).
- "Abort shall also be set to one if an address outside of the range of user-accessible addresses is requested if IDNF is not set to one." To me this seems most likely, and I would probably interpret it as the result of a software bug (either your OS or some program you were running). In that case, it is not a sign of impending doom for the hard drive.
Just in case you're not tired of running diagnostics yet...
You could try smartctl -t long /dev/sda again to see if it produces any more errors in the SMART log, or you could leave this one as an unsolved X-file ;) and check the SMART log periodically to see whether it happens again. In any case, if you continue to use the drive without getting it to either reallocate or clear the pending sector, you're already taking a risk.
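For reference, the full round trip looks something like this:

```bash
# Start another extended self-test (it runs on the drive in the background).
smartctl -t long /dev/sda

# Poll the self-test execution status until it reports completion.
smartctl -c /dev/sda

# Then review the self-test results and the error log for new entries.
smartctl -l selftest /dev/sda
smartctl -l error /dev/sda
```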
Use a checksumming filesystem
For a little more safety, you may want to consider using a checksumming filesystem such as ZFS or btrfs to help protect against low-level data corruption. And don't forget to perform frequent backups if you have anything that cannot be easily reproduced.
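For example, with btrfs (assuming a filesystem mounted at /mnt/data), a periodic scrub re-reads all data and metadata and verifies the checksums:

```bash
# Start a scrub in the background, then check on its progress and results.
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data
```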
Best Answer
Modern hard drives spend their idle time quietly doing the following:
- Scrubbing (scanning) for failed or failing sectors
- Re-writing weak sectors to "strengthen" them
http://www.wdc.com/wdproducts/library/other/2579-850105.pdf
That's how your counts went up during idle time ;-)