I've got a self-written FUSE file system that I mount on my NFS server at the mountpoint /export/example/mount. I then export a parent directory of the FUSE mount via NFS. The /etc/exports
contains the options nohide,crossmnt,subtree_check,ro,no_root_squash
and allows free access to everyone:
/export/example *(nohide,crossmnt,subtree_check,ro,no_root_squash)
I can mount this export on my NFS client and access it. But as soon as I access the FUSE mount through NFS, my client hangs until I umount the NFS share (and I need the -f option to accomplish that).
I've tried mounting the FUSE as my working user and as root. The results are the same.
The server runs Ubuntu 12.04, the client SuSE 9.3. The FUSE file system is written in Python and works locally without any trouble; only the export via NFS fails. I have no security restrictions, as all of this is on a private network with only trusted users.
Does anybody have an idea what could cause my trouble or (even better) how to solve the issue?
I've thought about replacing NFS with SSHFS to work around the problem, but that won't work because the client system is too old to support SSHFS (SSHFS is itself based on FUSE, and FUSE isn't supported there).
Best Answer
Most Linux distributions ship with a kernel that does not allow exporting a FUSE-mounted file system over NFSv2 or NFSv3. Your choices are:

1. Implement your file system in kernel space.
2. Export it over NFSv4, which requires an fsid= option in /etc/exports.
I use option 2 myself. In the illustration below, commands starting with # are run on the server, and commands starting with $ are run on the client. This is my server configuration; as you can see, I am exporting a FUSE mount point:
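The original configuration listing did not survive in this copy. As a minimal sketch of what such a setup could look like (the paths and the fsid value are illustrative assumptions, not the answerer's actual configuration):

```shell
# Show the FUSE mount on the server (output format varies by FUSE implementation)
# mount | grep fuse

# Illustrative /etc/exports entry: fsid=0 marks this directory as the
# NFSv4 pseudo-root, which is what makes exporting a FUSE mount possible
# cat /etc/exports
/export/example/mount *(ro,fsid=0,no_subtree_check,no_root_squash)

# Re-read /etc/exports after editing it
# exportfs -ra
```

The key point is the fsid= option: FUSE file systems have no stable device number, so the NFS server needs an explicitly assigned file system identifier.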
Here is what I did on my client:
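The client commands are also missing from this copy. A hedged reconstruction, assuming the server is reachable under the placeholder hostname `server` and the export above uses fsid=0 (with NFSv4, the fsid=0 export becomes the root of the server's pseudo file system, so the client mounts `server:/`):

```shell
$ sudo mount -t nfs4 server:/ /mnt
$ ls /mnt
```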
To verify that your failure is caused by exporting FUSE over NFS v2/v3, export that mount point specifically without the NFSv4 fsid option and see if you get an error:
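The verification listing is missing here; as a sketch, re-exporting the FUSE mount point without an fsid might look like the following (the exact error text depends on your kernel and nfs-utils version):

```shell
# exportfs -o ro,no_subtree_check '*:/export/example/mount'
exportfs: /export/example/mount does not support NFS export
```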
If, on the server, you export the mount point while it is not yet mounted and only mount the FUSE file system afterwards, you should see an error in the server's log when an NFS client attempts to access it: