Question 1:
With regards to the `-b` option: this depends on your disk. Modern, large disks have 4KB blocks, in which case you should set `-b 4096`. You can get the block size from the operating system, and it's also usually obtainable by reading the disk's information off of the label, or by googling the model number of the disk. If `-b` is set to something larger than your block size, the integrity of the `badblocks` results can be compromised (i.e. you can get false negatives: no bad blocks found when they may still exist). If `-b` is set to something smaller than the block size of your drive, the speed of the `badblocks` run suffers. I'm not sure, but there may be other problems with setting `-b` smaller than your block size as well; since the run isn't verifying the integrity of an entire block at once, it may still be possible to get false negatives if it's set too small.
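If you want to check rather than guess, Linux will report both sector sizes directly; a minimal sketch, with `/dev/sdX` standing in for your device:

```
# Ask the kernel for the drive's sector sizes:
sudo blockdev --getss /dev/sdX     # logical sector size
sudo blockdev --getpbsz /dev/sdX   # physical block size

# Match -b to the physical block size, e.g. on a 4KB-sector drive:
sudo badblocks -b 4096 -s -v /dev/sdX
```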
The `-c` option corresponds to how many blocks should be checked at once: batch reading/writing, basically. This option does not affect the integrity of your results, but it does affect the speed at which `badblocks` runs. `badblocks` will (optionally) write, then read, buffer, and check, repeating for every N blocks as specified by `-c`. If `-c` is set too low, your `badblocks` runs will take much longer than they otherwise would, since queueing and processing a separate I/O request incurs overhead, and the disk might also impose additional per-request overhead. If `-c` is set too high, `badblocks` might run out of memory; if this happens, it will fail fairly quickly after it starts. Additional considerations here include parallel `badblocks` runs: if you're running `badblocks` against multiple partitions on the same disk (a bad idea), or against multiple disks over the same I/O channel, you'll probably want to tune `-c` to something sensibly high given the memory available to `badblocks`, so that the parallel runs don't fight for I/O bandwidth and can parallelize in a sane way.
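As a sketch (the batch size here is arbitrary; tune it to your available memory, since the buffer grows with `-b` times `-c`):

```
# Test 16384 blocks per batch instead of the default 64 to cut
# per-request I/O overhead; scale -c down if you're running several
# badblocks instances in parallel.
sudo badblocks -b 4096 -c 16384 -s -v /dev/sdX
```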
Question 2:
Contrary to what other answers indicate, the `-w` write-mode test is not more or less reliable than the non-destructive read-write test, but it is twice as fast, at the cost of being destructive to all of your data. I'll explain why:
In non-destructive mode, `badblocks` does the following:
1. Read existing data, checksum it (read again if necessary), and store it in memory.
2. Write a predetermined pattern (overrideable with the `-t` option, though usually not necessary) to the block.
3. Read the block back, verifying that the read data is the same as the pattern.
4. Write the original data back to the disk.
   - I'm not sure about this, but it probably also re-reads and verifies that the original data was written successfully and still checksums to the same thing.
In destructive (`-w`) mode, `badblocks` only does steps 2 and 3 above. This means that the number of read/write operations needed to verify data integrity is cut in half. If a block is bad, the data will be erroneous in either mode. Of course, if you care about the data that is stored on your drive, you should use non-destructive mode, as `-w` will obliterate all data and leave `badblocks`' patterns written to the disk instead.
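For reference, these are the two invocations being compared; `/dev/sdX` is a placeholder, and the second command destroys everything on the disk:

```
# Non-destructive read-write test (steps 1-4 above):
sudo badblocks -n -s -v /dev/sdX

# Destructive write-mode test (steps 2 and 3 only, twice as fast,
# erases the entire disk):
sudo badblocks -w -s -v /dev/sdX
```

Note that both modes require the device to be unmounted.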
Caveat: if a block is going bad, but isn't completely gone yet, some read/write verification pairs may work, and some may not. In this case, non-destructive mode may give you a more reliable indication of the "mushiness" of a block, since it does two sets of read/write verification (maybe; see the bullet under step 4). Even if non-destructive mode is more reliable in that way, it's only more reliable by coincidence. The correct way to check for blocks that aren't fully bad but can't sustain multiple read/write operations is to run `badblocks` multiple times over the same data, using the `-p` option.
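For example:

```
# Keep re-scanning until 5 consecutive passes find no new bad blocks,
# which helps expose "mushy" blocks that only fail intermittently.
sudo badblocks -n -p 5 -s -v /dev/sdX
```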
Question 3:
If SMART is reallocating sectors, you should probably consider replacing the drive ASAP. Drives that lose a few sectors don't always keep losing them, but the cause is usually a heavily-used drive getting magnetically mushy, or failing heads/motors resulting in inaccurate or failed reads/writes. The final decision is up to you, of course: based on the value of the data on the drive and the reliability you need from the systems you run on it, you might decide to keep it in service. I have some drives with known bad blocks that have been spinning with SMART warnings for years in my fileserver, but they're backed up on a schedule such that I could handle a total failure without much pain.
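To see where a drive stands, `smartctl` from smartmontools will show the relevant counters (`/dev/sdX` is a placeholder again):

```
# Overall SMART health verdict:
sudo smartctl -H /dev/sdX

# Full attribute table; watch attribute 5 (Reallocated_Sector_Ct)
# and 197 (Current_Pending_Sector):
sudo smartctl -A /dev/sdX
```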
The password has to be set in the BIOS under the ATA security extension. Usually there's a tab in the BIOS menu titled "Security". Authentication will occur at the BIOS level, so nothing this software "wizard" does has any bearing on setting up the authentication. It's unlikely that a BIOS update will add the HDD password feature if it wasn't previously supported.
To say that you're setting up the encryption is misleading: the drive is ALWAYS encrypting every bit it writes to the chips; the disk controller does this automatically. Setting an HDD password on the drive is what takes your security level from zero to pretty much unbreakable. Only a maliciously planted hardware keylogger or an NSA-sprung remote BIOS exploit could retrieve the password used to authenticate ;-) (I guess; I'm not sure what they can do to the BIOS yet.) The point is that it's not totally insurmountable, but depending on how the key is stored on the drive, it's the most secure method of hard drive encryption currently available. That said, it's total overkill; BitLocker is probably sufficient for most consumer security needs.
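If you're curious what the drive itself reports, `hdparm` on Linux can dump the ATA security state without changing anything; a read-only sketch, with `/dev/sdX` as a placeholder:

```
# Show the drive's Security section: whether ATA security is
# supported, enabled, locked, or frozen.
sudo hdparm -I /dev/sdX | grep -A 10 'Security:'
```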
When it comes to security, I guess the question is: How much do you want?
Hardware-based full disk encryption is arguably far more secure than software-level full disk encryption like TrueCrypt. It also has the added advantage of not impeding your SSD's performance: the way SSDs stow their bits can sometimes lead to problems with software solutions. Hardware-based FDE is a less messy, more elegant, and more secure option, but it hasn't "caught on" even among those who care enough to encrypt their valuable data. It's not tricky to do at all, but unfortunately many BIOSes simply don't support the "HDD password" function (NOT to be confused with a simple BIOS password, which can be circumvented by amateurs). I can pretty much guarantee you, without even looking in your BIOS, that if you haven't found the option yet, your BIOS doesn't support it and you're out of luck. It's a firmware problem, and there's nothing you can do to add the feature short of flashing a modified BIOS, or setting the password directly from the OS with something like hdparm, which is something so irresponsible that even I wouldn't attempt it. It has nothing to do with the drive or the included software; this is a motherboard-specific problem.
The ATA security feature set is nothing more than a set of commands the BIOS issues to the drive. What you're trying to set is an HDD User and Master password, which will be used to authenticate against the unique key stored securely on the drive. The "User" password will allow the drive to be unlocked and boot to proceed as normal; the same goes for "Master". The difference is that the "Master" password is needed to change passwords in the BIOS or to erase the encryption key in the drive, which renders all its data inaccessible and irrecoverable instantly. This is called the "Secure Erase" feature. The protocol supports a 32-byte password string, meaning up to 32 characters. Of the few laptop manufacturers that support setting an HDD password in the BIOS, most limit it to 7 or 8 characters. Why every BIOS company doesn't support it is beyond me. Maybe Stallman was right about proprietary BIOS.
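Normally you'd set these passwords in the BIOS, but for illustration the same ATA commands can be issued from Linux with `hdparm`. This is a sketch only, and 'MyPassword' is obviously a placeholder; get it wrong (or run it through a BIOS that later refuses to unlock the drive) and you can lock yourself out or wipe the disk:

```
# Set a User password; the drive locks on the next power cycle:
sudo hdparm --user-master u --security-set-pass 'MyPassword' /dev/sdX

# DANGEROUS: ATA Secure Erase discards the encryption key, instantly
# and irrecoverably destroying all data on the drive:
sudo hdparm --user-master u --security-erase 'MyPassword' /dev/sdX
```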
The only laptops (pretty much no desktop BIOS supports HDD passwords) I know of that will allow you to set a full-length 32-character HDD User and Master password are the Lenovo ThinkPad T- and W-series. Last I heard, some ASUS notebooks have such an option in their BIOS as well. Dell limits HDD passwords to a weak 8 characters.
I am much more familiar with the key storage in Intel SSDs than Samsung's. Intel was, I believe, the first to offer on-chip FDE in their drives, starting with the 320 series, although that used 128-bit AES. I haven't looked extensively into how this Samsung series implements key storage, and nobody really knows at this point; obviously customer service was of no help to you. I get the impression that only five or six people in any tech company actually know anything about the hardware they sell. Intel seemed reluctant to cough up the specifics, but eventually a company rep answered somewhere in a forum. Keep in mind that for the drive manufacturers this feature is a total afterthought; they don't know or care anything about it, and neither do 99.9% of their customers. It's just another advertisement bullet point on the back of the box.
Hope this helps!
The idea that SSDs are fragile snowflakes that will melt under the white heat of data is a bit of a misconception. Many torture tests have been run on drives (The Tech Report did one with last-generation drives), and basically it's pretty hard to kill a modern SSD with excessive writes.
A bit of a disclaimer: SMART needs careful interpretation, and it tells you something can go wrong, not that it will. SSDs may have different SMART attributes depending on brand, so look up the documentation for your drive. I primarily own Samsungs at the moment, so my answer references the drives I have and the software for them.
People have put SSDs through workloads significantly worse than what they were rated for, and they have survived. Treat them like any other storage: back them up, of course, but they aren't typically going to die that fast.
There are a few factors in SSD endurance: process size (larger is generally better, though not always), bits per cell (SLC is better than MLC, which is better than TLC), and so on. Most modern drives nonetheless have impressive endurance. Most drives also have a certain amount of 'spare' cells (a.k.a. overprovisioning) that help mitigate dead or worn-out NAND. Amusingly, the best drives (enterprise-class SLC) and the worst consumer-level TLC drives both have a lot of this. In short, the drive handles this so you don't have to.
Interestingly, SSDs use the same SMART standards as any other drive, and either your manufacturer's tools or your favourite SMART information tool will show you the raw values, though the attributes may differ. Annoyingly, these standards are not very standard, and interpretation is as much an art as a science. I just tend to rely on the little green health-status label to tell me everything's fine. That said, Samsung has a rough guide to interpreting the values, and suggests that except for total LBAs written, the raw values should be indicative of what's going on. Check with your specific manufacturer to be sure.
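For example, with smartmontools; the attribute numbers below are what my Samsung drives report, so treat them as an illustration rather than a universal mapping:

```
# Dump the raw SMART attribute table:
sudo smartctl -A /dev/sdX

# On Samsung consumer drives, 177 (Wear_Leveling_Count) tracks wear
# and 241 (Total_LBAs_Written) tracks lifetime writes.
```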
Practically speaking, unless you're hitting the bathtub curve of premature death, or a weird bug, your drive is at least going to live as long as its warranty.