Linux – Why is the Linux NFS server implemented in the kernel as opposed to userspace?

kernel linux nfs

I was just wondering: why is the Linux NFS server implemented in the kernel rather than as a userspace application?

I know a userspace NFS daemon exists, but it's not the standard way of providing NFS service.

I would think that running the NFS server as a userspace application would be the preferred approach, as it can provide added security by having the daemon run in userspace rather than in the kernel. It would also fit the common Linux principle of doing one thing and doing it well (and the idea that daemons shouldn't be the kernel's job).
In fact, the only benefit I can think of for running it in the kernel would be a performance boost from avoiding context switches (and that is a debatable reason).

So is there any documented reason why it is implemented the way it is? I tried googling around but couldn't find anything.


There seems to be a lot of confusion, so please note that I am not asking about mounting filesystems; I am asking about providing the server side of a network filesystem. There is a very distinct difference: mounting a filesystem locally requires support for that filesystem in the kernel, but serving one over the network does not (e.g. Samba or unfs3).

Best Answer

unfs3 is dead as far as I know; Ganesha is the most active userspace NFS server project right now, though it is not completely mature.

Although it serves different protocols, Samba is an example of a successful file server that operates in userspace.

I haven't seen a recent performance comparison.

Some other issues:

  • Ordinary applications look files up by pathname, but nfsd needs to be able to look them up by filehandle. This is tricky and requires support from the filesystem (and not all filesystems can support it). In the past it was not possible to do this from userspace, but more recent kernels have added the name_to_handle_at(2) and open_by_handle_at(2) system calls (sketched below, after this list).
  • I seem to recall blocking file-locking calls being a problem; I'm not sure how userspace servers handle them these days. (Do you tie up a server thread waiting on the lock, or do you poll? Both options are sketched after this list.)
  • Newer filesystem semantics (change attributes, delegations, share locks) may be easier to implement in the kernel first (in theory--they mostly haven't been yet).
  • You don't want to have to check permissions, quotas, etc., by hand--instead you want to change your uid and rely on the common kernel vfs code to do that. And Linux has a system call, setfsuid(2), that should do that (sketched after this list). For reasons I forget, I think that has proved more complicated to use in servers than it should be.
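
To make the filehandle point concrete, the round trip a userspace server needs looks roughly like the following sketch. It assumes a 2.6.39 or later kernel for the two system calls, and CAP_DAC_READ_SEARCH for open_by_handle_at(2); the command-line arguments are purely illustrative:

    /* Minimal path -> handle -> open round trip: the kind of thing a
     * userspace nfsd needs so that a client's filehandle keeps working
     * even after the file is renamed.
     * Usage (illustrative): ./handledemo <path> <mount-point-of-that-fs> */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <path> <mountpoint>\n", argv[0]);
            return 1;
        }

        /* Allocate the largest possible handle up front rather than probing
         * for the required size with handle_bytes = 0 first. */
        struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
        int mount_id;
        fh->handle_bytes = MAX_HANDLE_SZ;

        if (name_to_handle_at(AT_FDCWD, argv[1], fh, &mount_id, 0) == -1) {
            perror("name_to_handle_at"); /* EOPNOTSUPP: fs can't do handles */
            return 1;
        }

        /* Later, possibly in another process: reopen purely by handle.
         * Needs CAP_DAC_READ_SEARCH, hence typically root. */
        int mount_fd = open(argv[2], O_RDONLY | O_DIRECTORY);
        int fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
        if (fd == -1) {
            perror("open_by_handle_at");
            return 1;
        }
        printf("reopened %s by handle as fd %d\n", argv[1], fd);
        close(fd);
        return 0;
    }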
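
On the locking question, the two options look roughly like this in fcntl(2) terms -- just a sketch of the trade-off, not a claim about how any particular server does it:

    #include <errno.h>
    #include <fcntl.h>

    /* Option 1: blocking acquire -- ties up the calling server thread until
     * the kernel grants the lock (or a signal interrupts the wait). */
    static int lock_blocking(int fd)
    {
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        return fcntl(fd, F_SETLKW, &fl);
    }

    /* Option 2: non-blocking attempt -- on conflict, queue the client's lock
     * request and retry from the event loop instead of sleeping here. */
    static int lock_try(int fd)
    {
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        if (fcntl(fd, F_SETLK, &fl) == 0)
            return 0;                           /* got the lock */
        if (errno == EACCES || errno == EAGAIN)
            return 1;                           /* busy: retry later */
        return -1;                              /* real error */
    }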
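
And on the uid-switching point, a per-request switch might look something like this (assuming the server runs with CAP_SETUID; a real server also has to juggle setfsgid(2) and supplementary groups, which is part of why it is messier than it sounds):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/fsuid.h>
    #include <sys/types.h>

    /* Serve one request as the client's uid: flip the filesystem uid, let
     * the normal kernel permission checks apply to the open(), then flip
     * back.  setfsuid() affects only the calling thread, which is what a
     * threaded server wants -- but it returns the previous fsuid even on
     * failure, so careful code has to call it again to verify the switch
     * actually took effect. */
    static int open_as_client(uid_t client_uid, const char *path)
    {
        uid_t saved = setfsuid(client_uid);
        int fd = open(path, O_RDONLY);  /* permission-checked as client_uid */
        setfsuid(saved);                /* restore the server's own fsuid */
        return fd;
    }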

In general, a kernel server's strengths are closer integration with the vfs and the exported filesystem. We can make up for that by providing more kernel interfaces (such as the filehandle system calls), but that's not easy. On the other hand, some of the filesystems people want to export these days (like gluster) actually live mainly in userspace. Those can be exported by the kernel nfsd using FUSE--but again extensions to the FUSE interfaces may be required for newer features, and there may be performance issues.

Short version: good question!
