I found the problem: I had to add the fsid option to the NFS export options, so the full list now looks like this:
fsid=1,crossmnt,rw,no_root_squash,async,no_subtree_check
Note that yast gives no warning here. I was able to track the problem down because I ran exportfs directly and got an error about the fsid.
In short, the fsid is the way client and server identify an export after it's mounted.
As the man page states, the fsid will be derived from the underlying filesystem, if not specified.
The four exports have the same fsid, so it's possible that when client1 asks for files from its mount, the server thinks it's trying to access client4's export (assuming the server keeps only the latest occurrence of a given fsid).
I guess there are a few ways to validate this hypothesis: for instance, checking that one (and only one) of the 4 clients works, or keeping only the client1 export (removing the other 3) and confirming that client1 then works.
See also this answer for a way to query the fsid from a client, using the mountpoint -d command, which you could run from the 4 clients to confirm that the 4 mounts share the same fsid.
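As a sketch of that check (the NFS mount point /mnt/nfs below is hypothetical; the runnable line is demonstrated on /, which is always a mount point):

```shell
# mountpoint -d prints the device number (major:minor) of the filesystem
# backing a mount point; on an NFS mount it reflects the fsid in use.
# On each of the 4 clients you would run it against the NFS mount point:
#   mountpoint -d /mnt/nfs    # hypothetical path; substitute your own
# Demonstrated here on the root filesystem:
mountpoint -d /
```

If all 4 clients print the same value for their NFS mounts, the duplicate-fsid hypothesis is confirmed.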
Why is that a solution?
Because with distinct fsids, the exports look distinct to the NFS server, so it can properly match client accesses to their corresponding mounts.
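For example, an /etc/exports along these lines (the paths and client names are hypothetical) gives each export its own fsid:

```
/srv/export1  client1(fsid=1,crossmnt,rw,no_root_squash,async,no_subtree_check)
/srv/export2  client2(fsid=2,crossmnt,rw,no_root_squash,async,no_subtree_check)
/srv/export3  client3(fsid=3,crossmnt,rw,no_root_squash,async,no_subtree_check)
/srv/export4  client4(fsid=4,crossmnt,rw,no_root_squash,async,no_subtree_check)
```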
Should we use fsid on every export?
Yes, I think that's good practice: it ensures you keep control, so changes in the underlying storage devices and exports will not affect your clients.
(In my case, I recall adopting it because some of my NFS servers with disks on a SAN would sometimes scan disks in a different order, so after a reboot /dev/sdh would suddenly become /dev/sdj. Mounting by label ensured the disk was mounted at the correct location, but the fsid would change and clients would get lost. This was before the ubiquity of UUIDs, which apparently are now supported and are of course a much better solution, since they don't break when disks are scanned in a different order. Still, specifying the fsid explicitly is not a bad idea; it lets you keep full control.)
Best Answer
You can get from a cifs mount to an nfs export via a fuse filesystem, though I don't think I would recommend it for something as essential as backup.
When I tried this once I looked for a fuse filesystem that would be as transparent as possible, and ended up with fuse-convmvfs. This software is intended to convert filenames from one encoding to another, but if you configure it with the same encoding on both sides, it seems to do what you need. Quite simply, if you have your cifs mount at /mnt/samba, you can mount your fuse filesystem at /mnt/fuse and export that directory by nfs with a suitable /etc/exports entry and the corresponding mount commands.
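A minimal sketch of what that setup might look like; the paths and client network are placeholders, and the convmvfs option names (srcdir, icharset, ocharset) are from the fuse-convmvfs documentation as I remember it, so check them against its man page:

```shell
# Mount the convmvfs layer over the existing cifs mount, converting
# utf8 to utf8 (a no-op translation). allow_other lets other users,
# including the nfs server, access the mount; for a non-root user this
# requires user_allow_other in /etc/fuse.conf.
convmvfs /mnt/fuse -o srcdir=/mnt/samba,icharset=utf8,ocharset=utf8,allow_other

# Then an /etc/exports entry along these lines (placeholder network):
#   /mnt/fuse  192.168.1.0/24(rw,no_subtree_check,fsid=10)
# and re-export:
exportfs -ra
```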
The user_allow_other part is probably not needed for an nfs export. While this is OK as an experiment, note that nfs is dangerous on filesystems that do not assign inodes in a repeatable way, and that is probably why nfs on top of cifs is not implemented. Adding the fuse layer will not necessarily fix this. Perhaps if you can independently produce a list of md5 sums of each file, locally on the cifs server and locally on the backup machine, and compare the two, you might have some confidence in the backup.