Most Linux distributions ship with a kernel that does not allow exporting a FUSE mounted file system using NFSv2 or NFSv3. Your choices are:
1. Implement your file system in kernel space.
2. Export it in NFSv4, which requires an fsid= option.
I use option 2 myself. In the illustration below, commands starting with # are run on the server, and commands starting with $ are run on the client.
This is my server configuration; as you can see, I am exporting a FUSE mount point:
# mount | tail -n1
convmvfs on /mnt/gb2312 type fuse.convmvfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
# grep gb2312 /etc/exports
/mnt/gb2312 192.168.0.0/16(no_subtree_check,fsid=0)
Here is what I did on my client:
$ sudo mount -t nfs4 server:/ /mnt/
$ ls /mnt
Downloads IMAGES Library lost+found
To verify that your failure is caused by exporting FUSE over NFSv2/v3, export that mount point specifically without the NFSv4 fsid= option and see if you get an error:
# exportfs -a
exportfs: /mnt/gb2312 requires fsid= for NFS export
If, on the server, you export the mount point while it is unmounted and only mount it with FUSE later, you should see the following in your log when an NFS client attempts to use it:
# tail /var/log/syslog
Aug 18 03:54:31 server rpc.mountd[17183]: Cannot export /mnt/gb2312, possibly unsupported filesystem or fsid= required
Aug 18 04:00:52 server rpc.mountd[17183]: Caught signal 15, un-registering and exiting.
After experimenting and searching for a while longer, I was finally able to solve it.
As found in this thread (about Fedora, but close enough to Mac), it seems that, while NFSv3 will allow sudo mount <...> <server-ip>:/export/share <...>, NFSv4 seems to require sudo mount <...> <server-ip>:/ <...> (mounting the "root" directory of the export, as opposed to the exported directory itself). After correcting that, my directories mount fine, although it appears to mount the /export directory instead of the /export/share directory (adding one more directory level). Not a big deal, but worth noting if there happens to be a fix for that. EDIT: I was wrong; it turns out you can mount the /share directory specifically by using sudo mount <...> <server-ip>:/share <...>, basically just omitting the root directory of the exported path.
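To make the path mapping concrete, here is a sketch of how an NFSv4 pseudo-root relates to client mount paths. It assumes a separate fsid=0 export for /export, which is one common way to set this up; the client mount points are hypothetical:

```
# Server /etc/exports: fsid=0 marks /export as the NFSv4 pseudo-root
/export        *(ro,no_subtree_check,fsid=0)
/export/share  *(insecure,no_subtree_check,rw,nohide,sync)

# Client: NFSv4 paths are relative to the pseudo-root, not absolute server paths
sudo mount -t nfs4 <server-ip>:/       /mnt/export   # the whole pseudo-root (/export)
sudo mount -t nfs4 <server-ip>:/share  /mnt/share    # just /export/share
```

In other words, the server path after the colon is resolved against the directory exported with fsid=0, not against the server's real filesystem root.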
As an interesting side note, if I change the /etc/exports line on the server from /export/share *(insecure,no_subtree_check,rw,nohide,sync) to /export/share *(insecure,fsid=0,no_subtree_check,rw,nohide,sync), the target directory on the client, NFS/Share_Media, seems to become infinitely self-nested once mounted, for some reason. Just figured I'd include that observation in case someone from the future has the same problem with their flying car.
Best Answer
You can force ZFS to be loaded early by listing it in a file under /etc/modules-load.d/. Say we create /etc/modules-load.d/zfs.conf containing just the module name, zfs, on a line of its own. The code itself also comes with a systemd service (actually a couple of them), and you can add systemd dependencies with the latest mount implementations, for example via the x-systemd.requires= mount option. (Disclaimer: I am aware that x-systemd.requires works on the latest Arch and Debian Testing; it may not be there yet in Ubuntu 16.04, although it is in the mount man page.)