New answer (2015-03-22)
(Note: this answer is simpler than my previous one, but not more secure. My first answer is stronger because it keeps files read-only through filesystem mount options, before permission flags even apply, so forcing a write to a file without write permission won't work at all.)
Yes, under Debian, there is a package: fsprotect (homepage).
It uses aufs (by default; another unionfs tool can be used) to permit changes during the live session, but keeps them in RAM, so everything is forgotten at reboot.
You can install it simply by running:
apt-get install fsprotect
Once installed, per the online documentation:
- Edit /boot/grub/menu.lst, /etc/default/grub (GRUB 2) or /etc/lilo.conf and add "fsprotect=1G" to the kernel parameters.
- Adjust 1G as needed.
- Apply the changes (i.e. run update-grub).
- Edit /etc/default/fsprotect if you want to protect filesystems other than /.
- Reboot.
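For GRUB 2, for example, the kernel parameter typically ends up in /etc/default/grub; a sketch (the quiet flag and the 1G size are just examples, not fsprotect defaults):

```
# /etc/default/grub -- append fsprotect to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet fsprotect=1G"
```

Then run update-grub so the change lands in the generated GRUB configuration.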
You may also want to password-protect the GRUB bootloader or forbid any changes to it.
From there, even if a file is protected against changes, for example by
chmod ugo-w myfile
and you open it with vi myfile and force a write with :w!, the write will succeed and myfile will be changed. You can reboot to get the unmodified myfile back.
Forcing a write like that is not even possible with my first solution, below:
Old (first) answer:
Yes, it is a heavy solution, but a powerful one!
Making a read-only root usable
You have to mount some directories read-write, like /var, /etc and maybe /home. This can be done using aufs or unionfs. I like another way, using /dev/shm and mount --bind:
cp -a /var /dev/shm/
mount --bind /dev/shm/var /var
Beforehand, you could move all directories that do not have to change during normal operation into a /static-var tree, then create symlinks in /var:
mkdir /static-var
mkdir /static-var/cache
mkdir /static-var/lib
mv /var/lib/dpkg /static-var/lib/dpkg
ln -s /static-var/lib/dpkg /var/lib/dpkg
mv /var/cache/apt /static-var/cache/apt
ln -s /static-var/cache/apt /var/cache/apt
... # and so on
So when / is remounted read-only, copying /var into /dev/shm won't take much space, as most files have been moved to /static-var and only symlinks need to be copied into RAM.
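The move-and-symlink commands above can be wrapped in a small helper; a sketch, with example paths (the function name is mine, not part of any package):

```shell
#!/bin/sh
# Move a directory into the static tree and leave a symlink behind,
# exactly as done by hand above.
move_static() {
    src="$1"    # e.g. /var/lib/dpkg
    dest="$2"   # e.g. /static-var/lib/dpkg
    mkdir -p "$(dirname "$dest")"
    mv "$src" "$dest"
    ln -s "$dest" "$src"
}

# Typical use, mirroring the commands above:
#   move_static /var/lib/dpkg  /static-var/lib/dpkg
#   move_static /var/cache/apt /static-var/cache/apt
```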
A good way to tune this finely is to do a full power cycle, then one day of normal work, and finally run a command like:
find / -type f -mtime -1 2>/dev/null
This shows which files need to be located on a read-write partition.
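A variant of the same idea, as a sketch: drop a time marker right after boot, then list everything written since it (the function name and marker path are examples):

```shell
#!/bin/sh
# List regular files changed since a reference marker, to decide what
# must live on a read-write partition.
changed_since() {
    root="$1"; marker="$2"
    find "$root" -xdev -type f -newer "$marker" 2>/dev/null
}

# On the real system:
#   touch /tmp/boot-marker              # right after the power cycle
#   changed_since / /tmp/boot-marker    # after a day of normal work
```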
Logging
As this host has no writable persistent storage, you have to configure a remote syslog server in order to keep history and other logs:
echo '*.* @mySyslogServer.localdomain' >/etc/syslog.conf
This way, if your system breaks for any reason, everything up to that point has been logged.
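On current Debian releases the syslog daemon is typically rsyslog rather than classic syslogd; an equivalent drop-in file (the hostname is the same example as above) would be:

```
# /etc/rsyslog.d/remote.conf -- forward all messages to a remote collector
# (single @ = UDP on port 514; use @@ for TCP)
*.*  @mySyslogServer.localdomain:514
```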
Upgrading
When running with some mount --bind in use, the simplest way to do an upgrade while the system is in use (without running init 1, to reduce downtime) is to rebuild a clean root in which to run the upgrade.
After remounting / in read-write mode:
mount -o remount,rw /
for mpnt in /{,proc,sys,dev{,/pts}}; do
    mount --bind $mpnt /mnt$mpnt
done
chroot /mnt
apt-get update && apt-get dist-upgrade
exit
umount /mnt/{dev{/pts,},proc,sys,}
sync
mount -o remount,ro /
And now:
shutdown -r now
As with all things pertaining to security, there aren't any guarantees, but you also need to balance risk (and cost) against probability. From experience (and I've been running dozens of *nix boxen since the dark ages), I've never really had significant power-caused filesystem corruption.
Some of these machines were even running on non-journalled filesystems (ufs and ext2 usually). Some of them were embedded, and a few were mobile phones like the Nokia N900 — so a good power supply wasn't at all guaranteed.
It's not that filesystem corruption can't happen, it's just that the probability of it happening is low enough that it shouldn't worry you. Still, no reason not to hedge your bets.
In answer to your literal questions:
- At least the first book you referenced was written before ext4 existed; when the author suggests using ext3, they're really saying ‘don't use unstable or non-journalled filesystems like ext2’. Try ext4: it's quite mature, and it has some decent options for non-spinning disks which may extend the life expectancy of your flash device.
- Chances are it would lose you the last block or two, not the entire file. With a journalled filesystem, this will be about the only loss. There are failure scenarios where I could see random data sprayed across the file, but they seem about as likely as a micrometeorite smashing right through your embedded device.
- See 2. Nothing is 100.00% safe.
If you have a second IDE channel, stick a second CF card in there and grab a backup of the filesystem periodically. There are a few ways to do this: rsync, cp, dump, dd, even using the md(4) (software RAID) device (you add the second drive occasionally, let it sync, then remove it; if both devices are live all the time, they run the same risk of filesystem corruption). If you use LVM, you can even grab snapshots. For a data collection embedded device, I'd just use an ad hoc solution which mounts the second filesystem, copies over the data log, then immediately unmounts it. If you're worried about the device having a good boot image, stick a second copy of the boot manager and all necessary boot images on the second device and configure the computer to boot from either CF card.
I wouldn't trust a second copy on the same device because storage devices fail more often than stable filesystems. Much more often, in my experience so far (at work, there was a bitter half-joke about the uncannily high chances of Friday afternoon disk failures. It was almost a weekly event for a while). Whether the disk is spinning or not, it can fail. So keep your eggs in two baskets if you can, and you'll protect your data better.
If the data is particularly sensitive, I'd pay regular visits to the device, swap the backup CF for a fresh one and reboot, letting it fsck all its filesystems for good measure.
Best Answer
From a filesystem perspective using ext3 or ext4 with default options will normally provide you with enough crash consistency. You certainly won't suffer filesystem loss or damage to any files that haven't been written to right before the power loss.
There are many considerations about how to handle crash consistency on any filesystem. If your application only creates new files, or overwrites existing files by writing a temporary file and atomically renaming it into place, then the default data=ordered mode of ext4 will be fine. Still, until a call to fsync() on both the file and its directory entry completes, or the OS flushes its cache, there is no guarantee that the data will be there after a power failure. That also assumes your storage devices honor fsync().
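The write-temporary-then-rename pattern mentioned above can be sketched in shell (the function name is mine; the `sync FILE` form, which fsyncs just that file, needs GNU coreutils 8.24 or newer):

```shell
#!/bin/sh
# Replace a file so that a crash leaves either the old or the new
# contents, never a mix. Reads the new contents from stdin.
atomic_write() {
    target="$1"
    tmpfile="$(dirname "$target")/.$(basename "$target").tmp.$$"
    cat > "$tmpfile"
    sync "$tmpfile"                # fsync the data (GNU coreutils >= 8.24)
    mv "$tmpfile" "$target"        # rename() is atomic within one filesystem
    sync "$(dirname "$target")"    # fsync the directory entry as well
}

# Example: echo 'new contents' | atomic_write /etc/myapp.conf
```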
If the application needs to guarantee consistency between file metadata and data, and performance is not a concern, you can use data=journal so that all changes to file data, as well as filesystem metadata, are journaled rather than just metadata. This avoids incomplete-write situations such as a file's size growing but the appended data being lost and replaced with null characters.
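A sketch of what that looks like in /etc/fstab (the UUID is a placeholder; expect a noticeable write-performance cost, and for the root filesystem the option may also need to be passed as rootflags=data=journal on the kernel command line, since it cannot be changed on a remount):

```
# /etc/fstab -- journal file data as well as metadata
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,data=journal  0  1
```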