Kernel Modules – Disadvantages of Linux Kernel Modules

kernel, kernel-modules, linux-kernel

I am trying to understand the disadvantages of using Linux kernel modules. I understand the benefit of using them: the ability to dynamically insert code into a running system without having to recompile and reboot the base system. Given this strong advantage, I was guessing most kernel code should then live in kernel modules rather than in the base kernel, but that does not seem to be the case -- a good number of core subsystems (like memory management) are still built into the base kernel.

One reason I can think of is that kernel modules are loaded very late in the boot process and hence core functionality has to go in the base kernel. Another reason I read was about fragmentation.
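Concretely, this built-in vs. modular choice shows up in the kernel build configuration. The symbols below are real Kconfig options; the particular values are only illustrative:

```
# Fragment of a kernel .config (values illustrative)
CONFIG_EXT4_FS=y        # "y": ext4 compiled into the base kernel image
CONFIG_BTRFS_FS=m       # "m": btrfs built as a loadable module (btrfs.ko)
CONFIG_MMU=y            # core subsystems such as memory management are
                        # boolean-only options: they cannot be set to "m"
```

Tristate options (y/m/n) can be built as modules; core subsystems are plain boolean options and can only be built in or left out.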

I didn't really understand why kernel modules cause memory fragmentation; can someone please explain? Are there any other downsides to using kernel modules?

Best Answer

Yes, the reason that essential components (such as mm) cannot be loadable modules is that they are essential -- the kernel will not work without them.

I can't find any references claiming the effects of memory fragmentation with regard to loadable modules are significant, but this part of the LKM how-to might be interesting reading for you.

I think the question is really part and parcel of the issue of memory fragmentation generally, which happens on two levels: the fragmentation of real (physical) memory, which the kernel's mm subsystem manages, and the fragmentation of virtual address space, which may occur with very large applications (and which I'd presume is mostly a result of how they are designed and compiled).

With regard to the fragmentation of real memory, I do not think this is possible at finer than page-size (4 KB) granularity. So if you were reading 1 MB of virtually contiguous space that is actually 100% fragmented into 256 discontiguous pages, there may be some 255 extra minor operations involved. In that bit of the how-to we read:

The base kernel contains within its prized contiguous domain a large expanse of reusable memory -- the kmalloc pool. In some versions of Linux, the module loader first tries to get contiguous memory from that pool into which to load an LKM, and goes to the vmalloc space only if a large enough space is not available. Andi Kleen submitted code to do that in Linux 2.5 in October 2002. He claims the difference is in the several per cent range.

Here the vmalloc space -- kernel memory that, like the memory of userspace applications, is virtually contiguous but may be backed by physically scattered pages -- is what is potentially prone to fragment into pages. This is simply the reality of contemporary operating systems (they all manage memory via virtual addressing). We might infer from this that virtual addressing could represent a performance penalty of "several per cent" in userland as well, but insofar as virtual addressing is necessary and inescapable in userland, that comparison is purely theoretical.
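To put concrete numbers on the page-granularity point above, here is a quick sketch; the 4 KB page size and 1 MB region size are the assumptions used earlier, not anything the kernel guarantees everywhere:

```python
# Worst-case page-granularity arithmetic for a virtually contiguous region
# that is fully fragmented in physical memory (assumed sizes, see above).
PAGE_SIZE = 4 * 1024          # 4 KB page
REGION = 1 * 1024 * 1024      # 1 MB virtually contiguous region

pages = REGION // PAGE_SIZE   # number of pages backing the region
extra_ops = pages - 1         # at worst, one extra minor operation per
                              # additional discontiguous page

print(pages, extra_ops)       # 256 255
```

So a fully fragmented 1 MB region costs on the order of a couple of hundred extra page-level operations, not thousands.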

There is the possibility of further compounding fragmentation through the fragmentation of a process's virtual address space (as opposed to the real memory behind it), but this would never apply to kernel modules (whereas the point in the previous paragraph apparently could).
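You can observe that kind of virtual address space fragmentation in any ordinary process via /proc. A small Linux-specific sketch (the file is real; the function name is mine):

```python
import os

def count_vma_regions(pid="self"):
    """Count distinct mapped regions in a process's virtual address space.

    Each line of /proc/<pid>/maps describes one contiguous mapping
    (Linux-specific); returns None on systems without procfs.
    """
    path = f"/proc/{pid}/maps"
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    n = count_vma_regions()
    if n is not None:
        print(f"this process's address space is split into {n} regions")
```

Even a trivial process typically shows dozens of separate mappings (code, heap, stack, shared libraries), which is virtual address space fragmentation in action.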

If you want my opinion, it is not worth much contemplation. Keep in mind that even with a highly modular kernel, the most used components (fs, networking, etc.) tend to be loaded very early and remain loaded, hence they will almost certainly sit in a contiguous region of real memory, for what that is worth (which might be a reason not to pointlessly load and unload modules).
