To control how Linux caches things, refer to the kernel documentation:
https://www.kernel.org/doc/Documentation/sysctl/vm.txt
In particular, look at vfs_cache_pressure; you probably want a very low value, or maybe even zero (though 1 sounds a bit safer to me):
vfs_cache_pressure
------------------
Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.
At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.
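As a sketch, on any sysctl-capable system you can inspect the current value and lower it for the running kernel (the write needs root; 1 here is my cautious suggestion from above, not an official recommendation):

```shell
# Show the current value (world-readable)
cat /proc/sys/vm/vfs_cache_pressure

# Lower it for the running kernel (needs root); 1 strongly prefers
# keeping dentry/inode caches without the OOM risk of 0
sysctl -w vm.vfs_cache_pressure=1
```

The runtime change is lost on reboot; see below for making it persistent.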
Also, you may want to modify swappiness so that the system never swaps, or only does so in extreme cases. The drop_caches option might be handy for explicitly dropping data you no longer want cached.
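Dropping caches is typically done like this (both steps need root; note drop_caches only discards *clean* cache objects, so sync first to write dirty data out and make as much as possible droppable):

```shell
# Write dirty pages out first so nothing unwritten is at risk
sync

# 1 = page cache, 2 = dentries and inodes, 3 = both (needs root)
echo 3 > /proc/sys/vm/drop_caches
```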
There are probably other options that may help, so review the kernel documentation. To apply them persistently, I'd put the settings you want to change in /etc/sysctl.conf, or whatever mechanism your OS has to restore them at boot.
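A sketch of what the relevant /etc/sysctl.conf entries might look like (the values are assumptions to tune for your workload, not recommendations):

```
# Strongly prefer keeping dentry/inode caches
# (0 disables this reclaim entirely, which risks OOM; 1 is the cautious extreme)
vm.vfs_cache_pressure = 1

# Only swap under extreme memory pressure
vm.swappiness = 1
```

Running sysctl -p as root applies the file without rebooting.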
There's a bit of an inconsistency, or at least an ambiguity, in your story here:

I'd still rather lose it all than facing an 'unable to mount', 'wait for this 10 minutes fsck'

implies -- although you don't actually say it -- that this is a problem you are actually experiencing. But then:

e2fsprogs-libs (dependency to jfsutils) seems to be hellishly difficult to compile in my distribution.

means you don't have any fsck at all, since e2fsprogs-libs is a dependency of e2fsprogs, which provides e2fsck. So perhaps you are still in the planning stage and have not even tested the system with, e.g., ext4, but instead jumped to the conclusion that you should start with JFS? Is there any particular reason for that?
I've noticed on the raspberry pi exchange (the pi's primary storage is also a SD card) that a significant number of users seem to be very frustrated by problems of this sort, even though the majority (including myself) have never had it at all. At first I assumed these were people ignorant of the fact that the system should be cleanly shut down, but that is not a hard point to grasp when explained, and there are people who report it even though the system HAS been shut down properly.
You've already said you need this to be able to tolerate power cuts (which is fair enough), but I mention this because it implies there are some pis, or some SD cards, or some combination of both, that are just prone to corrupting the filesystem due to some event (surge?) that occurs regularly either when the plug is pulled, or when it is put back in. I also have NOT seen -- and there's been plenty of time for plenty of people to try -- ANY reports of someone saying they've switched to btrfs or jfs or whatever and now the problem is solved.
The other mysterious thing about this is even if people are yanking the cord, this should not regularly result in an unusable filesystem. Certainly I've done it a bunch of times w/ the pi, and scores if not hundreds of times w/ a regular linux box (the power was cut, the system has become unresponsive, I'm exhausted and angry, etc.) and while I've seen minor data loss, I've never seen a filesystem corrupted to the point of being unusable after a quick fsck.
Again, presuming all these reports are true (I don't see why numbers of people would lie about it), there's something much more going on than just not cleanly unmounting, but it seems to only affect a small percentage of users, implying again some kind of common hardware defect.
On the pi I write -y to /forcefsck in a boot script, so that on the next boot fsck is run automatically and any problems are fixed, whether or not this appears to be necessary. On a 700 MHz single core this takes ~10 seconds for a 12 GB filesystem containing ~4 GB of data. So "10 minutes" sounds like an incredibly long time, especially since you've already said "This is the small filesystem for write!".
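A sketch of that boot hook. Caveat: on many sysvinit-style distributions the mere *presence* of /forcefsck triggers the check, and whether the file's contents are actually passed to fsck as flags is distribution-specific (Debian-family systems, for instance, take fix-up options from FSCKFIX in /etc/default/rcS instead):

```shell
#!/bin/sh
# Request a forced fsck on the next boot; the init scripts
# remove the flag file after the check has run.
echo -y > /forcefsck
```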
You might also consider calling sync at regular intervals.
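For instance, a trivial background loop (the 30-second interval is an arbitrary assumption; a cron job works just as well):

```shell
#!/bin/sh
# Flush dirty data to the card every 30 seconds, so a power cut
# loses at most that window of unwritten data.
while true; do
    sync
    sleep 30
done
```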
Finally, you should update the question with more factual, specific details of the problems you have actually encountered, and less hyperbole. Otherwise it just looks too much like a premature XY problem, which will likely get quickly skipped over by people with a lot of experience and potential advice for you.
Best Answer
That's rather an odd question, because you don't run the kernel like you run a program. The kernel is a platform to run programs on. Of course there is setup and shutdown code, but it's not possible to run the kernel on its own: there must always be a main "init" process, and the kernel will panic if it's not there. If init tries to exit, the kernel will also panic.
These days the init process is something like systemd. If not otherwise specified, the kernel will try to run a program from a list of locations, starting with /sbin/init. See the init parameter here: http://man7.org/linux/man-pages/man7/bootparam.7.html. In an emergency you can boot Linux with init=/bin/bash. But notice how you always specify a file on the file system to run. So the kernel will panic if it starts up and has no file system, because without one there is no way to load init.
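As an example, the kernel command line (edited in your bootloader, e.g. by pressing 'e' at the GRUB menu; the kernel image and device names here are placeholders) might look like:

```
linux /boot/vmlinuz root=/dev/sda1 ro init=/bin/bash
```

Everything after the kernel image is a kernel parameter; init= simply replaces the default /sbin/init lookup with the file you name.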
Some confusion may arise because of the kernel's initialisation phase. An initial ramdisk is loaded from an image on disk containing vital drivers and setup scripts, and these are executed before the real root file system is mounted. But make no mistake: the initial ramdisk is itself a file system. With an initial ramdisk, /init is called (which is stored on the initial ramdisk); in many distributions it is ultimately this which calls /sbin/init. Again, without a file system this is impossible.