I was going to suggest hacking e2fsck
to disable the specific checks for a last-mount time or last-write time in the future. These are defined in problem.c / problem.h, and used in super.c. But while looking, I discovered that e2fsprogs 1.41.10 adds a new option to /etc/e2fsck.conf
called broken_system_clock. This seems to be exactly what you need, and since you're running Red Hat Enterprise Linux 6, you should have 1.41.12, which includes this option. From the man page:
broken_system_clock
The e2fsck(8) program has some hueristics that assume that the
system clock is correct. In addition, many system programs make
similar assumptions. For example, the UUID library depends on
time not going backwards in order for it to be able to make its
guarantees about issuing universally unique ID’s. Systems with
broken system clocks, are well, broken. However, broken system
clocks, particularly in embedded systems, do exist. E2fsck will
attempt to use hueristics to determine if the time can not be
trusted; and to skip time-based checks if this is true. If this
boolean is set to true, then e2fsck will always assume that the
system clock can not be trusted.
Yes, the man page can't spell "heuristics". Oops. But presumably the code works anyway. :)
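For reference, here's roughly what the resulting config file would look like. Per the e2fsck.conf(5) man page, broken_system_clock belongs in the [options] stanza:

```
[options]
	broken_system_clock = true
```

With this set, e2fsck should skip the "last mount time is in the future" complaints entirely instead of guessing whether the clock is sane.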
Fsck returns your filesystem to a consistent state. This is not necessarily the filesystem's “latest” state, because that state might have been lost in the crash. In fact, if there were half-written files at the time of the crash, then the filesystem was not left in a consistent state, and that is precisely what fsck is designed to repair. In other words, after running fsck, your filesystem is as up-to-date as it can get.
If your application requires feedback as to what is stored on the disk in case of a crash, you'll need to do more work than just writing to a file. You need to call sync, or better fsync, after a write operation to ensure that that particular write has been committed to the disk (but if you end up doing this a lot, your performance will suffer, and you'll want to switch to a database engine). You'll also need a journaled filesystem configured for maximum crash survival (as opposed to maximum speed).
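To make that concrete, here's a minimal Python sketch of the write-then-fsync pattern (the file path is just for illustration):

```python
import os
import tempfile

# Illustrative path; any regular file on the filesystem in question works.
path = os.path.join(tempfile.gettempdir(), "durable-demo.txt")

with open(path, "w") as f:
    f.write("important record\n")
    f.flush()               # push Python's userspace buffer to the OS
    os.fsync(f.fileno())    # ask the OS to commit this file's data to disk

# Only after fsync returns can you report to the user that the data
# is on disk (modulo write caches in the drive itself).
```

Note that fsync only covers the file's data and metadata; if the file was just created, you may also need to fsync the containing directory for the name to survive a crash.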
The property that an operation (such as a disk write) that has been performed cannot be undone (even in the event of a system crash) is called durability. It's one of the four fundamental properties of databases (ACID). If you need that property, read up on transactions.
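As a sketch of what "read up on transactions" buys you, here's SQLite from Python; the table name and row are made up for illustration. When the transaction commits, the engine fsyncs its journal for you, so the row is durable without any manual sync calls:

```python
import os
import sqlite3
import tempfile

# Fresh database file for the demo.
db = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(db)
conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
with conn:  # opens a transaction; commits atomically on success
    conn.execute("INSERT INTO log VALUES (?)", ("event 1",))
conn.close()
# Either the whole transaction is on disk, or none of it is:
# that's atomicity plus durability, two of the ACID properties.
```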
Although filesystems are a kind of database, they're usually not designed with the ACID properties as a priority: the emphasis is on flexibility. You'll get better durability from a dedicated database engine. Then consider what happens if your disk, rather than your system, crashes: for high durability, you also need replication.
Best Answer
I'm answering this in the general context of "journalled filesystems".
I think that if you did a number of "unclean shutdowns" (by pulling the power cord or something), sooner or later you'd get to a filesystem state that would require fsck, or the moral equivalent of fsck, xfs_repair. The ext4 filesystem on my laptop for the most part just replays the journal on every reboot, clean shutdowns included, but every once in a while it does a full-on fsck.
But ask yourself what "replaying the journal" accomplishes. Replaying a journal just ensures that the disk blocks of the rest of the filesystem match the ordering that the journal entries demand. Replaying a journal amounts to a small fsck, or to parts of a full-on fsck.
I think there's some verbal sleight of hand going on: replaying a journal does part of what a traditional fsck does, and xfs_repair is exactly the same kind of program that e2fsck (or any other filesystem's fsck) is. The XFS people just believed, or their experience led them to believe, that xfs_repair shouldn't run on every boot, and that replaying the journal is enough.
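To see why replay restores consistency, here's a toy sketch in Python. This is purely illustrative and nothing like the real ext4 or XFS on-disk formats: the "journal" holds committed block writes, and replay re-applies them idempotently so the main area catches up even if the crash happened mid-write:

```python
# Toy model: the main filesystem area, possibly stale after a crash.
disk = {"block1": "old", "block2": "old"}

# Entries that were fully committed to the journal before the crash.
journal = [("block1", "new"), ("block2", "new")]

def replay(disk, journal):
    """Re-apply every committed journal entry to the main area.

    Idempotent: replaying twice (e.g. after a crash during replay)
    gives the same result as replaying once.
    """
    for block, data in journal:
        disk[block] = data
    return disk

replay(disk, journal)
# disk now matches what the journal entries demand
```

The key property is that entries only enter the journal once they're complete, so replay can never half-apply a write; that's the "small fsck" the journal gives you for free.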