The nfsstat -c program will show you the NFS version actually being used.
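For example (output abridged and illustrative; the exact counters will differ on your system), the per-version sections show which protocol version the calls are actually going over:
$ nfsstat -c
Client rpc stats:
calls      retrans    authrefrsh
1523       0          1523

Client nfs v4:
...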
If you run rpcinfo -p {server} you will see all the versions of all the RPC programs that the server supports. On my system I get this output:
$ rpcinfo -p localhost
   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    ...
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
...
This shows me that my NFS server (localhost in this example) offers versions 2, 3, and 4 of the NFS protocol, each over both UDP and TCP.
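If you want the client to use a specific one of those versions, you can request it at mount time with the vers= (and optionally proto=) options documented in nfs(5), for example (server name and paths here are hypothetical):
$ mount -t nfs -o vers=3,proto=tcp server:/export /mnt/export
$ mount -t nfs -o vers=4.2 server:/export /mnt/export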
In short, the fsid is the way client and server identify an export after it's mounted.
As the exports(5) man page states, if not specified, the fsid will be derived from the underlying filesystem.
The four exports have the same fsid, so it's possible that when client1 asks about files from its mount, the server thinks it's trying to access client4's export (assuming the server keeps only the latest occurrence of a given fsid).
There are a few ways to validate this hypothesis: for instance, check that one (and only one) of the 4 clients works, or keep only the client1 export, without the other 3, and confirm that client1 then works.
See also this answer for a way to query the fsid from a client, using the mountpoint -d command, which you could run on the 4 clients to confirm that the 4 mounts have the same fsid.
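For example (the mount point is hypothetical), mountpoint -d prints the device number the kernel associates with the mount, so identical values on all 4 clients would confirm the clash:
client1$ mountpoint -d /mnt/share
0:53
client4$ mountpoint -d /mnt/share
0:53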
Why is that a solution?
Because with distinct fsids, the exports will look distinct to the NFS server, so it will properly match client accesses to their corresponding mounts.
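A minimal /etc/exports sketch of that fix, assuming one export per client (hostnames, paths, and fsid values are hypothetical):
/srv/export/client1  client1.example.com(rw,sync,no_subtree_check,fsid=101)
/srv/export/client2  client2.example.com(rw,sync,no_subtree_check,fsid=102)
/srv/export/client3  client3.example.com(rw,sync,no_subtree_check,fsid=103)
/srv/export/client4  client4.example.com(rw,sync,no_subtree_check,fsid=104)
Then apply it on the server with exportfs -ra.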
Should we use fsid on every export?
Yes, I think that's good practice: it ensures you keep control, so that changes in the underlying storage devices and exports will not affect your clients.
(In my case, I recall adopting it because some of my NFS servers with disks on a SAN would sometimes scan disks in a different order, so after a reboot /dev/sdh would suddenly become /dev/sdj. Mounting by label ensured the filesystem landed at the correct location, but the fsid would change and clients would get lost. This was before the ubiquity of UUIDs, which apparently are now supported and are of course a much better solution, since they don't break when disks are scanned in a different order. Still, specifying the fsid explicitly is not a bad idea; it lets you keep full control.)
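To make that concrete (labels, UUIDs, and paths here are hypothetical), the server-side fstab can mount the disk by label or UUID so a device rename doesn't move the data, while an explicit fsid in /etc/exports keeps the handle the clients see stable:
# /etc/fstab on the NFS server
LABEL=export1    /srv/export1  ext4  defaults  0 2
# or: UUID=...   /srv/export1  ext4  defaults  0 2

# /etc/exports
/srv/export1  *(rw,sync,no_subtree_check,fsid=1)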
Best Answer
unfs3 is dead as far as I know; Ganesha is the most active userspace NFS server project right now, though it is not completely mature.
Although it serves different protocols, Samba is an example of a successful file server that operates in userspace.
I haven't seen a recent performance comparison.
Some other issues:
- Ordinary lookups are done by pathname, but nfsd needs to be able to look files up by filehandle. This is tricky and requires support from the filesystem (and not all filesystems can support it). In the past it was not possible to do this from userspace, but more recent kernels have added the name_to_handle_at(2) and open_by_handle_at(2) system calls.
- A userspace server also has to perform permission checks as the requesting user; there is a setfsuid(2) system call that should do that, but for reasons I forget, I think that's proved more complicated to use in servers than it should be.
In general, a kernel server's strengths are closer integration with the vfs and the exported filesystem. We can make up for that by providing more kernel interfaces (such as the filehandle system calls), but that's not easy. On the other hand, some of the filesystems people want to export these days (like gluster) actually live mainly in userspace. Those can be exported by the kernel nfsd using FUSE, but again, extensions to the FUSE interfaces may be required for newer features, and there may be performance issues.
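As an illustration of that last point (the server name, volume name, paths, and fsid value here are hypothetical), exporting a FUSE-backed filesystem such as a gluster mount through the kernel nfsd typically needs an explicit fsid=, since the server can't derive a stable one from the FUSE device:
# mount the gluster volume locally via FUSE
mount -t glusterfs gfs1:/myvol /mnt/gluster

# /etc/exports entry; fsid= is needed because the FUSE mount has no persistent UUID
/mnt/gluster  *(rw,no_subtree_check,fsid=42)

# reload the export table
exportfs -ra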
Short version: good question!