macOS – Cross-platform file system for sharing files between Mac, Windows, and Linux

Tags: filesystem, macos, ntfs, unix, windows

I have a triple-boot laptop (Yosemite, Windows 8.1, CentOS 7) and need a partition for sharing files between the three OSes. I've been using exFAT, since it's supported by OS X and Windows, but I ran into issues with Linux: after I tried to mount the partition on Linux, OS X no longer recognized it and I couldn't mount it there at all. Only Windows still recognized it, and after one more day of use it got corrupted; I had to recover my data with TestDisk.

Now I'm looking for the most widely supported file system for sharing files between them. So far I've found these options:

  • NTFS: Using MacFUSE & NTFS-3G to enable read/write access, or Paragon NTFS. But I've heard bad things about the stability and speed of those options, and I would not like to lose my data. (See the mount sketch after this list.)

  • HFS+: Using MacDrive Pro on Windows to get read/write access to the Mac partition; I guess there's also a way to enable HFS+ support on Linux (also in the sketch after this list). I've heard good and bad things about MacDrive but I'm still not sure…

  • exFAT: This is the option I've already tried, with the bad experience described above, although most people seem to approve of it. Maybe I did something wrong, but the data loss is still a problem…

  • FAT32: Limited file size (4 GB per file) and volume size, and limited permission support. Not the one I would like to pick.
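
For reference, this is roughly what those mounts look like. This is a sketch only — the device names, mount points, and NTFS-3G install path are assumptions, and the Linux HFS+ driver is only safe for writes if journaling is disabled on the Mac side first (diskutil disableJournal):

    # OS X: mount an NTFS partition read/write via NTFS-3G
    # (installed on top of MacFUSE/OSXFUSE); /dev/disk1s1 is assumed.
    sudo mkdir -p /Volumes/NTFS
    sudo ntfs-3g /dev/disk1s1 /Volumes/NTFS -olocal -oallow_other

    # Linux: mount an HFS+ partition read/write with the hfsplus driver.
    # "force" overrides the read-only default used for journaled volumes.
    sudo mkdir -p /mnt/mac
    sudo mount -t hfsplus -o force,rw /dev/sdb2 /mnt/mac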

My needs, in order of priority:

  1. Stability (No data loss)
  2. Great File Size support
  3. Journaling
  4. Speed

UPDATE 1:
After more research I've found Tuxera NTFS for Mac, which seems nice, but… how good is it in real life? Does it offer full NTFS support as it claims? Great stability? Speed? Journaling?
Is it worth the price?

Best Answer

I've done this kind of thing for years and can probably help you avoid the same pains I went through.

Cloud storage would be ideal for some use-cases, but sketchy on privacy/security without additional work, and not necessarily suitable for use cases involving a large amount of data. (I've worked around security/privacy issues with transparent per-file encryption, and use this in parallel with the solution I've outlined below, for different use cases.)

Here are the local storage solutions in increasing order of viability (which is inherently subjective and dependent on specific use cases):

  1. exFAT: At the bottom only because of my own lack of experience with it, and its relative newness. There are compatibility problems between the platforms because of different block sizes. Apparently, formatting the drive in Windows with a block size smaller than 1024 bytes might work (see the formatting sketch after this list).
  2. NTFS: I've had all kinds of problems with NTFS-3G, going back and forth between Windows, Mac, and Linux. File corruption, lost data, etc. This was a few years ago, maybe it's better now - but it was "sold" as solid then and it wasn't.
  3. FAT32: In my experience, this is the only truly "cross-platform" file system that can bridge Mac, Linux, and Windows. (And cameras, and TVs, and...) There is a per-file 4 GB size limit and a 2 TiB total volume size limit. You can in theory overcome Windows' 32 GB FAT32 formatting limit with Fat32Formatter (or by formatting from Linux, as sketched after this list), but I don't know how compatible the result is across systems. In theory, FAT+ allows for 256 GiB files and the use of a higher block size.
  4. A virtual machine sharing its native filesystem to the host OS via CIFS: This is hands-down the best solution for most of my use cases (more on this below, with a Samba sketch).
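
To make the formatting workarounds above concrete, here's a rough sketch. The drive letter, device name, and allocation size are assumptions — verify the allocation size against your own cross-platform testing:

    REM Windows: format as exFAT with a 512-byte allocation unit
    REM (smaller than 1024 bytes, per the compatibility note above).
    format E: /FS:exFAT /A:512

    # Linux: format a partition as FAT32 beyond Windows' 32 GB
    # formatting limit (the 4 GB per-file limit still applies).
    sudo mkfs.fat -F 32 /dev/sdb1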

Years ago, when I got fed up with the data corruption using NTFS-3G, I started using a small VM running Windows 2000, and shared an NTFS volume "natively" to the host OS via CIFS. Performance can't compare to directly attached storage, but I finally got to say goodbye to data corruption and the distrust and headaches it caused. NTFS formatted from Windows 2000 worked flawlessly and interchangeably with more modern versions of Windows, including when switching back and forth between Windows 2000 in a VM and Windows Vista (at the time).

But still, NTFS just wasn't robust enough for reliably storing massive amounts of data over long periods of time, even if in a mirrored configuration (and especially in a RAID5 configuration). Mainly due to bitrot and lack of checksumming. Granted, it was the best thing around for a long time, but not any more.

Now, the only "cross-platform" file system I use is ZFS, presented via CIFS by Linux running in a VM. (I'm also increasingly using BTRFS which recently seems to have crossed some threshold of stability for my use cases. For a long time I only used it experimentally and it often let me down.)
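
For the curious, the CIFS side is just a plain Samba share exported from the guest. A minimal sketch, assuming Samba is installed in the Linux VM and the ZFS dataset is mounted at /tank/share — the pool, path, share name, and user below are all made up:

    # /etc/samba/smb.conf (fragment) in the Linux guest
    [share]
        path = /tank/share
        read only = no
        valid users = youruser

    # Then create the Samba user and restart the daemon:
    #   sudo smbpasswd -a youruser
    #   sudo systemctl restart smbd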

I don't use ZFS for Mac OS, only ZFS on Linux. (I used to use an OpenSolaris VM to host ZFS for the sake of purity and support for the most up-to-date ZFS features, until Oracle messed it up.)
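
As a concrete sketch of what "ZFS on Linux in a VM" means in practice (the pool name and virtual disk device names are assumptions):

    # Mirrored pool: ZFS checksums every block by default, which is
    # what guards against the bitrot I complained about above.
    sudo zpool create tank mirror /dev/vdb /dev/vdc
    sudo zfs create tank/share

    # Run periodically to verify checksums and self-heal from the mirror:
    sudo zpool scrub tank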

I tried ZFS for Mac a while back and it was too unstable and outdated. Maybe it's fine now, but my VM solution is flawless. And like I said, I'm increasingly using BTRFS anyway, which is a better match in many ways for my requirements (the first and foremost of which is rock-solid reliability - which ZFS has always provided).

I triple-boot my Macs, and when I'm not running Linux natively, I run the same native Linux installation in a VM. Linux is perfectly happy alternating between running in a VM with guest additions, and natively. I'm almost always running a Linux VM for "native" ZFS or BTRFS volume access via CIFS, when not running it natively.
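
If you're wondering how the same installation can boot both natively and in a VM: with VirtualBox (an assumption — other hypervisors have equivalents), you can wrap the physical disk in a raw-disk VMDK. The device name below is made up, and pointing this at the wrong disk is dangerous, so double-check it:

    # Create a VMDK that passes the physical disk holding the native
    # Linux install through to the VM (requires read access to the disk):
    sudo VBoxManage internalcommands createrawvmdk \
        -filename ~/native-linux.vmdk -rawdisk /dev/disk2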

I've seamlessly adjusted most of my workflows to accommodate the slower CIFS access to large "cross-platform" reliable storage. For example, if I need fast access to lots of working data, it's usually in an application that is unique to that particular host OS, and the data doesn't need to be accessible across platforms. So I just use whatever fast local SSD storage is available natively to the OS, and make regular copies to the slower "cross-platform" storage - or copy only when the project is done, depending on the specific use case.
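
Those "regular copies" are nothing fancy; a one-liner like this (the paths are made up) covers most of it:

    # Copy a local working directory to the CIFS-backed storage:
    rsync -a ~/Projects/current/ /Volumes/share/current/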

Tip: If you do go the VM route, you'll be tempted to share the VM file system via a bridged adapter. The advantage is that the VM gets its own IP address on the same subnet, so the storage is accessible even from other computers on that subnet. However, bridged adapters have two drawbacks:

  1. A bridged adapter is tied to a specific physical adapter, and if you switch from, say, wired to wireless, you may lose internet connectivity from within the VM (which is only a problem if you are also using the VM as your productivity OS, as I usually do).
  2. Bridged adapters can be finicky. Sometimes it "just works", but if you have problems, troubleshooting can be pretty messy.

A better solution is to configure the VM with two adapters:

  1. NAT, for internet access from the VM, which will work no matter what physical adapter is providing it.
  2. Host-only, configured with a static IP address, no DNS or gateway, a virtio adapter, and promiscuous mode.

Only your local machine will be able to access the VM's CIFS shares. It's not trivial to get this set up, but once you do, it's basically magic. A sketch follows.
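
Here's roughly what that looks like with VirtualBox (an assumption — the same idea works with other hypervisors); the VM name and IP range are made up:

    # Adapter 1: NAT, for internet access from the VM.
    VBoxManage modifyvm "storage-vm" --nic1 nat

    # Create a host-only network for CIFS traffic (typically vboxnet0)
    # and give the host side a static address:
    VBoxManage hostonlyif create
    VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1

    # Adapter 2: host-only, virtio, promiscuous mode allowed.
    VBoxManage modifyvm "storage-vm" --nic2 hostonly \
        --hostonlyadapter2 vboxnet0 \
        --nictype2 virtio --nicpromisc2 allow-all

    # Inside the guest, give adapter 2 a static IP on the same
    # subnet (e.g. 192.168.56.10) with no gateway or DNS.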

Good luck!