Run exportfs -a on the server machine.
Also, do both machines have all of the needed NFS support packages installed, and do their kernels have NFS support? You can find out whether the kernel supports a specific filesystem by examining the output of cat /proc/filesystems.
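For instance, a quick check might look like this (assuming a Linux machine; the exact module names and output vary):

```shell
# Check whether the running kernel knows about NFS. A line reading
# "nodev nfs" (or "nodev nfs4") means NFS support is compiled in or
# loaded as a module; no output may just mean the module isn't loaded
# yet (try modprobe nfs as root).
grep nfs /proc/filesystems || echo "no NFS support loaded in this kernel"
```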
And yes, the export file needs to be named /etc/exports.
Finally, check to see if you have enabled the NFS daemons during startup.
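How the daemons are enabled depends on the distribution; on a systemd-based system the check might look like this (the unit name nfs-server is an assumption — Debian and Ubuntu, for example, call it nfs-kernel-server):

```shell
# Guarded so it degrades gracefully on machines without systemd.
unit=nfs-server   # assumption: adjust to nfs-kernel-server on Debian/Ubuntu
if command -v systemctl >/dev/null 2>&1; then
    systemctl is-enabled "$unit" 2>/dev/null || echo "$unit is not enabled at boot"
else
    echo "not a systemd system; check the init scripts for $unit"
fi
```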
In short: Yes, simultaneous writes from multiple NFS clients will be corrupted.
Simultaneous appends locally are nicely interleaved, since the OS knows the file is opened in append mode, and atomically seeks to the current end of file before each write call, regardless of other processes that may have extended the file in the meanwhile.
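This local behaviour is easy to verify with a sketch like the following (two background writers appending short lines; the line count and format are arbitrary choices for the demo):

```shell
# Two concurrent writers appending to the same local file. Because each
# >> redirection opens the file with O_APPEND, the kernel atomically
# seeks to end-of-file before every write, so no line is lost or spliced.
: > appendtest
( for i in $(seq 1 500); do printf 'foo %06d\n' "$i" >> appendtest; done ) &
( for i in $(seq 1 500); do printf 'bar %06d\n' "$i" >> appendtest; done ) &
wait
wc -l < appendtest                                # expect 1000
grep -c -E '^(foo|bar) [0-9]{6}$' appendtest      # expect 1000: every line intact
```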
But on NFS, no such luck. The Linux NFS FAQ states it plainly:
A9. Why does opening files with O_APPEND on multiple clients cause the files to become corrupted?
A. The NFS protocol does not support atomic append writes, so append writes are never atomic on NFS for any platform.
Most NFS clients, including the Linux NFS client in kernels newer than 2.4.20, support "close-to-open" cache consistency:
A8. What is close-to-open cache consistency?
A. Perfect cache coherency among disparate NFS clients is very expensive to achieve, so NFS settles for something weaker that satisfies the requirements of most everyday types of file sharing. [...] When the application closes the file, the NFS client writes back any pending changes to the file so that the next opener can view the changes. This also gives the NFS client an opportunity to report any server write errors to the application via the return code from close(). This behavior is referred to as close-to-open cache consistency.
The NFS write operations just contain a position to write to, and the data to be written. There's no provision for centrally coordinating where the end of file is, which is what would be needed to make sure clients don't overwrite each other.
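The failure mode can be sketched locally with positioned writes (this only mimics what two NFS clients do — dd and stat stand in for the clients' WRITE requests, and stat -c is GNU coreutils):

```shell
# Each "client" resolves end-of-file on its own, then issues a write at
# that offset -- which is all an NFS client can do for O_APPEND.
printf 'seed\n' > nfsdemo
size=$(stat -c %s nfsdemo)     # both "clients" see the same EOF (5)
printf 'client-A\n' | dd of=nfsdemo bs=1 seek="$size" conv=notrunc 2>/dev/null
printf 'client-B\n' | dd of=nfsdemo bs=1 seek="$size" conv=notrunc 2>/dev/null
cat nfsdemo                    # client-B has overwritten client-A entirely
```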
(The page does seem a bit old, since it talks mostly about Linux kernel version 2.6, with no mention of 3.x. NFS version 4 is however mentioned in relation to kernel support, so it could be assumed the answer applies to v4 also.)
A little test made on a recent-ish Linux and an NFS v3 share: writing numbers out in a shell loop (for ((i=0 ; i<9999 ; i++)) ; do printf "$id %06d\n" $i >> testfile ; done) on two clients at the same time results in nicely corrupted output. Part of it:
barbar 001031
foo 000010
32
foo 000011
33
foo 000012
Here, the first loop on one machine wrote lines with barbar, while another loop on another machine wrote the foo lines. The line that should say barbar 001032 is written starting at the same position as the foo 000010 line, and only the final digits of the longer line are visible. (Note that in this case the file is actually opened and closed for each printf, since the redirection is inside the loop. But that only determines where the end of file was at the moment the file was opened; it does not make the appends atomic.)
If the file is kept open the whole time, larger blocks may be overwritten, since the promise is only that the client system writes changes to the server when the file is closed. Even truncating the file when opening doesn't change this much, since the truncation only clears the file, but does not prevent further writes by the other client when it closes the file.
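The difference between the two cases can be shown side by side (the file names here are arbitrary; run locally both variants produce identical output — the difference is only in how often the file is opened and closed, and therefore how often an NFS client would flush):

```shell
id=foo
# Variant 1: redirection inside the loop -- the file is opened (O_APPEND)
# and closed once per line, so an NFS client writes back after every line.
for ((i = 0; i < 100; i++)); do printf "$id %06d\n" "$i" >> percall; done
# Variant 2: redirection on the loop -- one open, one close, so an NFS
# client may buffer everything and write it back only at the end.
for ((i = 0; i < 100; i++)); do printf "$id %06d\n" "$i"; done >> perloop
cmp percall perloop && echo "identical locally"
```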
Best Answer
This doesn't exactly answer your question, but I'd advise against using rsnapshot over NFS. You are negating the primary benefit of rsync, which is the ability to transfer only a small amount of data over the network while detecting large portions of identical data. Rsync is designed to run over ssh, where it can invoke an rsync server on the other side of the connection and communicate with it via its own optimized protocol, which uses a rolling checksum to identify identical data. When rsync is run over NFS and it thinks a file might be different due to timestamp or size, it must read the entire file over NFS, even if only a small part changed, since it has no way of asking the remote side for checksums of the data.
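The principle rsync exploits can be sketched with a toy block comparison (this is only an illustration of the idea — rsync's real protocol uses rolling weak checksums plus strong checksums rather than fixed block offsets):

```shell
# Build an 8-block file, change one block, and count how many blocks a
# checksum-comparing transfer would actually need to send.
bs=4096
yes A | head -c $((bs * 8)) > oldfile
cp oldfile newfile
yes B | head -c "$bs" | dd of=newfile bs="$bs" seek=3 conv=notrunc 2>/dev/null
sent=0
for i in 0 1 2 3 4 5 6 7; do
    a=$(dd if=oldfile bs="$bs" skip="$i" count=1 2>/dev/null | md5sum)
    b=$(dd if=newfile bs="$bs" skip="$i" count=1 2>/dev/null | md5sum)
    [ "$a" = "$b" ] || sent=$((sent + 1))
done
echo "blocks to transfer: $sent of 8"   # only the changed block moves
```

A dumb copy (which is what rsync degenerates into over NFS whenever size or timestamp differ) would read and move all 8 blocks instead.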