While I'd love to see something like ZFS available for Windows hosts, NTFS isn't a horrible filesystem. It supports most "modern" filesystem features (extended attributes, journaling, ACLs, you name it), but it's hampered by the fact that Explorer and most other apps don't support any of them.
One thing that will absolutely kill its performance is having "too many" entries in a directory. Once you pass a couple thousand entries in one directory, everything slows to a crawl. The entire machine will stall waiting for NTFS to create or remove entries when this is happening.
I used to work with an app that generated HTML-based docs for .NET assemblies; it would create one file per property, method, class, namespace, etc. For larger assemblies we'd see 20+k files, all nicely dumped into a single directory. The machine would spend a couple of hours during the build blocked on NTFS.
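You can reproduce this effect yourself. The sketch below (hypothetical class name, file counts chosen to echo the numbers above) times how long it takes to create many empty files in a single directory; run it on an NTFS volume and on another filesystem to compare:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DirEntryTiming {

    // Create `count` empty files in `dir` and return elapsed milliseconds.
    static long timeCreate(Path dir, int count) throws IOException {
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            Files.createFile(dir.resolve("entry-" + i + ".html"));
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dir-entry-test");
        int count = 20_000; // roughly the per-assembly file count mentioned above
        System.out.println("Created " + count + " files in "
                + timeCreate(dir, count) + " ms");
    }
}
```

If the same count takes dramatically longer as the directory fills up, you are seeing the per-directory entry cost rather than raw disk throughput.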
In theory, Windows supports filesystem plugins, which would make native ZFS, ext3 or whatever (even FUSE) possible. In practice, the APIs are undocumented, so you're completely on your own.
Now, since you're doing Java development, could you install a different OS on your machine, or use a VM on top of Windows?
Also, you might want to try some platform-independent filesystem benchmarks (iozone, bonnie... there are probably more modern ones I don't know off the top of my head, maybe even a few written in Java) to see if it's actually the filesystem holding you back, or if it's something else. Premature optimization and all that...
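If you'd rather not install iozone or bonnie, a rough sequential-throughput check is easy to write in Java itself. This is a minimal sketch (hypothetical class and method names, not a replacement for a real benchmark suite):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsThroughput {

    // Write `totalBytes` to `file` in `chunk`-sized blocks; return MB/s.
    // Assumes totalBytes is a multiple of chunk.
    static double writeThroughput(Path file, int totalBytes, int chunk)
            throws IOException {
        byte[] buf = new byte[chunk];
        long start = System.nanoTime();
        try (OutputStream out = Files.newOutputStream(file,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int written = 0; written < totalBytes; written += chunk) {
                out.write(buf);
            }
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return (totalBytes / 1e6) / seconds;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("fs-bench", ".bin");
        System.out.printf("Sequential write: %.1f MB/s%n",
                writeThroughput(f, 64 * 1024 * 1024, 8192));
        Files.delete(f);
    }
}
```

Bear in mind that OS write caching can inflate these numbers, so treat the result as a sanity check, not a verdict on the filesystem.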
There are two main reasons for the performance difference, plus two possible contributing factors. First, the main reasons:
Increased Performance of ext4 vs. NTFS
Various benchmarks have concluded that the ext4 filesystem can perform a variety of read-write operations faster than an NTFS partition. Note that while synthetic tests are not always indicative of real-world performance, the results are consistent enough that we can count this as one reason.
Why ext4 performs better than NTFS can be attributed to a wide variety of factors. For example, ext4 supports delayed allocation directly. Again though, the performance gains depend strictly on the hardware you are using (and can be totally negated in certain cases).
Reduced Filesystem Checking Requirements
The ext4 filesystem is also capable of performing faster file system checks than other equivalent journaling filesystems (e.g. NTFS). According to the Wikipedia page:
In ext4, unallocated block groups and sections of the inode table are marked as such. This enables e2fsck to skip them entirely on a check and greatly reduces the time it takes to check a file system of the size ext4 is built to support. This feature is implemented in version 2.6.24 of the Linux kernel.
And now, the two possible reasons:
File System Checking Utilities Themselves
Different applications run different routines to actually perform the filesystem health "check". This is easy to see if you compare the fsck utility set on Linux with the chkdsk utility on Windows. These utilities are written for different operating systems and different filesystems. I bring this up as a possible reason because the low-level system calls in each operating system differ, so you may not be able to directly compare the utilities across two different operating systems.
Disk Fragmentation
This one is easy to understand, and also helps us to understand the differences between filesystems. While all digital data held in a file is the same, how it gets stored on the hard drive is quite different from filesystem to filesystem. File fragmentation can obviously increase access times, accounting for more of the speed difference.
Looking at http://www.tuxera.com/products/ntfs-open-source/ and the stats at http://www.tuxera.com/products/tuxera-ntfs-commercial/performance/, I do not think you can get better speed than with Tuxera's drivers.