/boot is not encrypted (the BIOS would have no way to decrypt it...). It could be ext4, but there really isn't any need for it to be: it usually doesn't get written to. The BIOS reads GRUB from the MBR, then GRUB reads the rest of itself, the kernel, and the initramfs from /boot. The initramfs prompts you for the passphrase. (Presumably, it's using cryptsetup and LUKS headers.)
The encryption is performed at a layer below the filesystem. You're using something called dm-crypt (that's the low-level in-kernel backend that cryptsetup uses), where "dm" means "Device Mapper". You appear to also be using LVM, which is also implemented by the kernel Device Mapper layer. Basically, you have a storage stack that looks something like this:
1. /dev/sda2 (guessing it's 2, could be any partition other than 1)
2. /dev/mapper/sda2_crypt (dm-crypt layer; used as a PV for VG archon)
3. LVM (volume group archon)
4. /dev/mapper/archon-root (logical volume in group archon)
5. ext4
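You can see the whole stack at a glance with lsblk. The output below is illustrative only (sizes and minor numbers made up; names follow the guesses above):

    $ lsblk
    NAME              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
    sda                 8:0    0  500G  0 disk
    ├─sda1              8:1    0  512M  0 part  /boot
    └─sda2              8:2    0  499G  0 part
      └─sda2_crypt    253:0    0  499G  0 crypt
        └─archon-root 253:1    0  499G  0 lvm   /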
You can find all this out with the dmsetup command. E.g., dmsetup ls will list the Device Mapper devices, dmsetup info will give some details about each, and dmsetup table will give the technical details of the translation each mapping layer is doing.
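For example (device names follow the hypothetical layout above; yours will differ):

    $ sudo dmsetup ls
    archon-root     (253:1)
    sda2_crypt      (253:0)
    $ sudo dmsetup table sda2_crypt
    # prints, for the crypt target, one line of the form:
    # <start> <length> crypt <cipher> <key> <iv-offset> <underlying-device> <offset>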
The way it works is that the dm-crypt layer (#2, above) "maps" the data by performing crypto. So anything written to /dev/mapper/sda2_crypt is encrypted before being passed to /dev/sda2 (the actual hard disk). Anything coming from /dev/sda2 is decrypted before being passed out of /dev/mapper/sda2_crypt.
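This is the mapping the initramfs creates when it prompts you for the passphrase; done by hand it would look roughly like this (a sketch, assuming LUKS and the names above):

    # open the LUKS container; prompts for the passphrase
    sudo cryptsetup open /dev/sda2 sda2_crypt
    # /dev/mapper/sda2_crypt now exists: writes to it are encrypted
    # before hitting /dev/sda2, and reads are decrypted on the way out
    # close it again with:
    sudo cryptsetup close sda2_crypt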
So any upper layers use that encryption, transparently. The upper layer you have using it first is LVM. You're using LVM to carve up the disk into multiple logical volumes. You've got (at least) one, called root, used for the root filesystem. It's a plain block device, so you can use it just like any other—you can put any filesystem you'd like there, or even raw data. The data gets passed down, so it will be encrypted.
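A hedged sketch of inspecting (and, on a fresh disk, creating) that LVM layer, using the names above:

    # inspect -- safe to run
    sudo pvs    # /dev/mapper/sda2_crypt shows up as a physical volume
    sudo vgs    # volume group archon
    sudo lvs    # logical volume root in group archon

    # how such a layout is typically created (do NOT run on a live system;
    # the 20G size is just an assumption for illustration)
    sudo pvcreate /dev/mapper/sda2_crypt
    sudo vgcreate archon /dev/mapper/sda2_crypt
    sudo lvcreate -n root -L 20G archon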
Things to learn about (check manpages, etc.):
- /etc/crypttab
- LVM (some important commands: lvs, pvs, lvcreate, lvextend)
- cryptsetup
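For instance, the /etc/crypttab entry behind a setup like this would look something like the following (fields per crypttab(5); the source device would more commonly be given by UUID):

    # <target>    <source device>  <key file>  <options>
    sda2_crypt    /dev/sda2        none        luks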
All three data journaling modes should leave the filesystem itself fully intact after a power failure, so it should always mount without errors. The difference is only in the data in your files: data=writeback mode may leave stale data (i.e., what was stored in the disk sectors before the writes your app did); data=ordered and data=journal should not do this.
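The mode is chosen per mount, e.g. via /etc/fstab (illustrative line; device name and other options assumed):

    # /etc/fstab
    /dev/mapper/archon-root  /  ext4  data=journal,errors=remount-ro  0  1

Note that ext4 refuses to change the data mode on a remount, so for the root filesystem you may also need rootflags=data=journal on the kernel command line.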
Most likely what you're seeing is that I/O barriers aren't working on your setup. First, make sure you're not mounting with barrier=0 or nobarrier. That boosts performance, but will cause corruption on power failure.
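You can confirm what the filesystem is actually mounted with (the output line is illustrative):

    $ grep ' / ' /proc/mounts
    /dev/mapper/archon-root / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0

If barrier=0 or nobarrier shows up in that option list, remove it from /etc/fstab and remount.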
If I/O barriers are on, it's also possible you're passing through a storage layer that doesn't support them. On older kernels, LVM and various mdraid levels didn't pass barriers through. (This was fixed in Linux 2.6.33, so it's only a concern if you're still running something as old as Lucid.)
Finally, it's possible your disks are telling lies. Disks have write caches. Especially with NCQ, they're supposed to tell the OS they've written data only when they've actually done so, but some have been known to tell the OS it's written when it's only in the disk's write cache. That increases performance, at least as long as the power stays on. You can try disabling the write cache on the disks, though you'll take a performance hit for it.
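To test that theory, you can turn the disk's write cache off with hdparm (SATA/PATA disks; this is the performance hit just mentioned):

    # query the current write-cache setting
    sudo hdparm -W /dev/sda
    # disable the write cache
    sudo hdparm -W0 /dev/sda
    # re-enable it later
    sudo hdparm -W1 /dev/sda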
Note also that flash-memory disks have a lot of work to do under the hood, and many of them don't handle power failure well. (For example, wear leveling sometimes requires that a full flash block of data be moved. If the power fails in the middle, bad things happen on some flash disks.)
Finally... have you considered a UPS?
Best Answer
Good news: it's expected.
-- Ted Ts'o
Unfortunately this doesn't explain what it means.
A literal reading of the manpage suggests that relatime suppresses both in-memory updates and disk writes. lazytime only suppresses disk writes (and applies to mtime as well as atime). This makes sense to me given the discussions that led to the implementation of lazytime. IOW it would be very easy to write a test for relatime. But the effect of lazytime is only visible if you look at disk writes, or test what happens with unclean shutdowns.
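For example, the relatime test can be as simple as this (stat -c %X prints the atime as an epoch timestamp):

    touch f                                    # atime == mtime, so the next read updates atime
    sleep 1; cat f > /dev/null; stat -c %X f   # atime moved forward
    sleep 1; cat f > /dev/null; stat -c %X f   # unchanged under relatime (atime now newer than mtime)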
Personally, the effect of lazytime on mtime sounds a bit odd to me. Maybe it's a nice optimization for systems with high uptime, but I don't know about the average desktop... and nowadays that's actually a laptop; we're not supposed to be so gung-ho about undefined or weirdly partially-defined behaviour on power failure. It's even more of a special case if you consider copy-on-write filesystems like btrfs, where the "inode" is likely to be updated even when the file size doesn't change. By contrast, relatime is lovely and deterministic. And the mtime optimization only seems to be helpful if you have writes to a large number of files which don't change their size. I'm not sure there's even a common benchmark for that; some very non-trivial database workload, I suppose.
Seriously Ted, why didn't we get lazyatime?