You can use the Disk Utility program (in Finder: Applications -> Utilities -> Disk Utility) to create a new disk image (a .dmg file) of whatever size you choose, which you can then mount by double-clicking the .dmg file. Once mounted, it shows up just like any other external disk on your Mac. You can point the emulator at this disk and tell it to store its "filesystem folder" there.
Here are Apple's instructions for creating a disk image using Disk Utility.
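If you'd rather script this than click through Disk Utility, the same thing can be done with hdiutil, Disk Utility's command-line counterpart. Below is a minimal sketch that just shells out to it; the 2 GB size, the EmulatorDisk volume name and the ~/emulator.dmg path are made-up examples, so adjust them to whatever your emulator needs.

    /*
     * Sketch: create and mount a disk image by shelling out to hdiutil,
     * the command-line counterpart of Disk Utility on macOS.
     * Size, volume name and image path are placeholders.
     */
    #include <stdlib.h>

    int main(void)
    {
        /* Create a 2 GB HFS+ image at ~/emulator.dmg. */
        if (system("hdiutil create -size 2g -fs HFS+ "
                   "-volname EmulatorDisk ~/emulator.dmg") != 0)
            return 1;

        /* Mount it; it appears at /Volumes/EmulatorDisk like any other disk. */
        return system("hdiutil attach ~/emulator.dmg");
    }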
I'll trade you my answer to your question for your answer to mine: What knobs have to be fiddled with in /proc or /sys to keep all the inodes in memory?
Now for my answer to your question:
I'm struggling with a similar-ish issue, where I'm trying to get ls -l to work quickly over NFS for a directory with a few thousand files when the server is heavily loaded.
A NetApp performs the task brilliantly; everything else I've tried so far doesn't.
Researching this, I've found a few filesystems that separate metadata from data, but they all have some shortcomings:
- dualfs: Has some patches available for 2.4.19 but not much else.
- lustre: ls -l is a worst-case scenario because all the metadata except the file size is stored on the metadata server, so every listing still has to query the object storage servers for each file's size.
- QFS for Solaris, StorNext/Xsan: Not known for great metadata performance without a substantial investment.
So none of those will help (unless you can revive dualfs).
The best answer in your case is to increase your spindle count as much as possible. The ugliest - but cheapest and most practical - way to do this is to pick up an enterprise-class JBOD (or two) and a Fibre Channel card, a few years old, off eBay. If you look hard, you should be able to keep your costs under $500 or so. The search terms "146gb" and "73gb" will be of great help. You should be able to convince a seller to make a deal on something like this, since they've got a bunch of them sitting around and hardly any interested buyers:
http://cgi.ebay.ca/StorageTek-Fibre-Channel-2TB-14-Bay-HDD-Array-JBOD-NAS-/120654381562?pt=UK_Computing_Networking_SM&hash=item1c178fc1fa#ht_2805wt_1056
Set up a RAID-0 stripe across all the drives. Back up your data religiously, because one or two of the drives will inevitably fail. Use tar for the backup instead of cp or rsync, so that the single drive on the receiving end won't have to deal with millions of inodes.
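To make the stripe-plus-tar idea concrete, here's a rough sketch. The device names, mount points and the 14-drive count are invented for illustration (they'll depend entirely on how your JBOD enumerates), and you'd run it as root.

    /*
     * Sketch only: build the RAID-0 stripe, put a filesystem on it,
     * then back it up with tar. All names below are placeholders.
     */
    #include <stdlib.h>

    int main(void)
    {
        /* Stripe across the 14 JBOD drives (assumed to be sdb..sdo). */
        if (system("mdadm --create /dev/md0 --level=0 "
                   "--raid-devices=14 /dev/sd[b-o]") != 0)
            return 1;

        /* Filesystem on the stripe, then mount it. */
        if (system("mkfs.ext4 /dev/md0") != 0 ||
            system("mount /dev/md0 /mnt/stripe") != 0)
            return 1;

        /* tar backup: the receiving drive sees one big sequential file
           instead of millions of little inodes. */
        return system("tar -cf /mnt/backupdisk/filestore.tar -C /mnt/stripe .");
    }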
This is the single cheapest way I've found (at this particular historical moment, anyway) to increase IOPS for filesystems in the 2-4TB range.
Hope that helps - or is at least interesting!
Best Answer
The simple answer is no. The long answer is...
NTFS does store filenames case-sensitively (it can happily hold README.txt and readme.txt in the same directory), and Windows itself can handle case-sensitive requests for filenames internally via the NtOpenFile / NtCreateFile system calls.
Unfortunately for you, the Win32 function CreateFile (used everywhere, including by fopen) internally calls NtCreateFile with the OBJ_CASE_INSENSITIVE flag set. In practice this means that every application going through CreateFile sees your filesystem case-insensitively, regardless of whether it is actually case-sensitive under the hood.
The only practical way I can think of for you to force case-sensitivity is to write a filter driver that strips the OBJ_CASE_INSENSITIVE flag from incoming requests, which would then let NTFS, ext2 or whatever underlying filesystem you have behave in its default, case-sensitive way.
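To make the NtCreateFile point concrete, here's a minimal sketch of what the native API allows when the flag is left out. It's illustrative only, not a workaround: the path is made up, you need to link against ntdll.lib, and as far as I know the kernel's global obcaseinsensitive registry setting (enabled by default) can still force case-insensitive matching no matter what you pass.

    /*
     * Illustrative only: open a file through the native API without
     * OBJ_CASE_INSENSITIVE, so the name lookup uses exact case
     * (kernel settings permitting). The path is a placeholder.
     * Link against ntdll.lib (MinGW: -lntdll).
     */
    #include <windows.h>
    #include <winternl.h>
    #include <stdio.h>

    int main(void)
    {
        UNICODE_STRING path;
        OBJECT_ATTRIBUTES attrs;
        IO_STATUS_BLOCK iosb;
        HANDLE h = NULL;
        NTSTATUS status;

        /* NT-style path: "\??\" fronts the Win32 drive-letter namespace. */
        RtlInitUnicodeString(&path, L"\\??\\C:\\temp\\README.txt");

        /* Fill in OBJECT_ATTRIBUTES by hand; Attributes = 0 is the whole
           point - CreateFile would have set OBJ_CASE_INSENSITIVE here. */
        attrs.Length = (ULONG)sizeof(attrs);
        attrs.RootDirectory = NULL;
        attrs.ObjectName = &path;
        attrs.Attributes = 0;
        attrs.SecurityDescriptor = NULL;
        attrs.SecurityQualityOfService = NULL;

        status = NtOpenFile(&h,
                            FILE_READ_ATTRIBUTES,       /* just prove the open */
                            &attrs,
                            &iosb,
                            FILE_SHARE_READ | FILE_SHARE_WRITE,
                            0);                         /* no special options */

        if (status >= 0) {                              /* NT_SUCCESS */
            printf("Opened an exact-case match for README.txt\n");
            NtClose(h);
        } else {
            printf("NtOpenFile failed: 0x%08lX\n", (unsigned long)status);
        }
        return 0;
    }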