What I understand about a 32-bit OS is that addresses are expressed in 32 bits, so at most the OS could use 2^32 bytes = 4GB of memory space.
The most that the process can address is 4GB. You are potentially confusing memory with address space. A process can have more memory than address space. That is perfectly legal and quite common in video processing and other memory intensive applications. A process can be allocated dozens of GB of memory and swap it into and out of the address space at will. Only 2 GB can go into the user address space at a time.
If you have a four-car garage at your house, you can still own fifty cars. You just can't keep them all in your garage. You have to have auxiliary storage somewhere else to store at least 46 of them; which cars you keep in your garage and which ones you keep in the parking lot down the street is up to you.
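To make the garage analogy concrete, here is a minimal sketch, in plain POSIX C rather than anything Windows-specific, of a process working through a backing store far larger than its address space by mapping one small window at a time. The file name and sizes are made up for illustration:

    /* Illustrative only: address a large backing store through a small window. */
    #define _FILE_OFFSET_BITS 64   /* 64-bit off_t even on a 32-bit build */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define WINDOW_SIZE (64 * 1024 * 1024)            /* 64 MB view: the "garage" */
    #define BACKING_SIZE (8LL * 1024 * 1024 * 1024)   /* 8 GB store: the "parking lot" */

    int main(void) {
        int fd = open("backing.dat", O_RDWR | O_CREAT, 0600);  /* hypothetical file */
        if (fd < 0 || ftruncate(fd, (off_t)BACKING_SIZE) != 0) {
            perror("backing store");
            return 1;
        }
        /* Visit all 8 GB, but only one 64 MB window ever occupies address space. */
        for (off_t off = 0; off < (off_t)BACKING_SIZE; off += WINDOW_SIZE) {
            char *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, off);
            if (win == MAP_FAILED) { perror("mmap"); return 1; }
            win[0] = 1;                 /* use the window */
            munmap(win, WINDOW_SIZE);   /* release the address space, keep the data */
        }
        close(fd);
        return 0;
    }

The data lives in the file (the parking lot); the mapping (the garage) is the only part that consumes address space.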
Does this mean that on any 32-bit OS, be it Windows or Unix, if the machine has more than 4GB of RAM plus page file on the hard disk, for example 8GB of RAM and a 20GB page file, memory will never be "used up"?
Absolutely it does not mean that. A single process could use more memory than that! Again, the amount of memory a process uses is almost completely unrelated to the amount of virtual address space it uses. Just like the number of cars you keep in your garage is completely unrelated to the number of cars you own.
Moreover, two processes can share non-private memory pages. If twenty processes all load the same DLL, the processes all share the memory pages for that code. They don't share virtual memory address space, they share memory.
My point, in case it is not clear, is that you should stop thinking of memory and address space as the same thing, because they're not the same thing at all.
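If you want to see the distinction in running code, here is a minimal sketch, in POSIX C, of two processes sharing the same physical page the way twenty processes share the pages of one loaded DLL. Each process has its own address space; the memory itself is shared:

    /* Illustrative only: shared memory across two address spaces. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* One page shared between parent and child. */
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        if (fork() == 0) {            /* child writes into the shared page... */
            strcpy(page, "written by the child");
            _exit(0);
        }
        wait(NULL);                   /* ...and the parent reads the same page */
        printf("parent reads: %s\n", page);
        return 0;
    }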
If this 32-bit OS machine has 2GB of RAM and a 2GB page file, increasing the page file size won't help performance. Is this true?
You have fifty cars and a four-car garage, and a 100 car parking lot down the street. You increase the size of the parking lot to 200 spots. Do any of your cars get faster as a result of you now having 150 extra parking spaces instead of 50 extra parking spaces?
OS X has three problems which contribute to this:
By default, any data written to or read from disk is cached in RAM at a higher priority than recent program data. Applications can disable this on a per-descriptor basis with the F_NOCACHE option to fcntl(), but few do. As a result, large amounts of disk activity cause memory that isn't being used at that very moment to be swapped out. That creates more disk activity, both for the swapping out and for reading that memory back in moments later, on top of the original disk activity.
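For the curious, here is a minimal sketch of what opting out looks like; the fcntl() call is the real macOS interface, while the file name and read loop are made up for illustration:

    /* Illustrative only: opt a descriptor out of the buffer cache (macOS). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("large_video.bin", O_RDONLY);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        /* Ask the kernel not to cache data moving through this descriptor. */
        if (fcntl(fd, F_NOCACHE, 1) == -1)
            perror("fcntl(F_NOCACHE)");

        char buf[1 << 16];
        while (read(fd, buf, sizeof buf) > 0)
            ;   /* stream the file without evicting other programs' memory */

        close(fd);
        return 0;
    }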
HFS+ does not handle concurrent file access well. In particular, opening and closing many different files at once creates tremendous contention and pretty much only one open/close operation can happen at a time.
Lots of OS X applications spread their disk access across lots of little files.
As a result, when two or more applications are trying to access a lot of files at once, the disk I/O load increases exponentially as swap activity competes with the applications for I/O.
Disabling the dynamic pager might prevent the early part of that exponential curve by removing the ability to push private/dirty application pages to disk. Instead, the system will likely scavenge pages from public/clean mapped files (executables, libraries, etc.) and from the cached file data that probably should not have been cached in the first place. Whether or not this actually improves performance would depend heavily on what applications you are using. Safari, for example, is extremely bad about managing its disk I/O so I imagine this would help.
The problem would occur if the amount of RAM needed actually exceeds the amount available: a panic crash is a very abrupt way to end your day. But if you are not editing large files or otherwise doing inherently memory-intensive things, this might be rare enough to consider risking.
By the way, you can use the lsof command to see which files are opened by which processes, and the fs_usage command to see a running log of file operations. Both work better when run as root or via sudo.
Best Answer
You can perform any operations you could perform with 164GB of RAM. But because the SSD is hundreds of times slower than RAM, it will take much longer.
You will want to turn the system's swappiness up so that you can use the faster SSD swap to effectively extend the size of the page cache. Otherwise, the system will assume swap and disk are about as fast and not move things into swap when it can read them from disk, which won't make sense in your unusual situation.
If you find you have a lot of disk I/O and very little swap being used, turn the swappiness up. If you find you're "churning" the SSD, turn the swappiness down. Note that this will have some minor negative effect on the life of your SSD -- the higher the swappiness, the more the effect. (With modern SSDs, it almost doesn't matter. There's no point in having an SSD if you don't use it.)
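For reference, here is a minimal sketch, assuming Linux, of adjusting swappiness programmatically by writing /proc/sys/vm/swappiness. It is equivalent to sysctl vm.swappiness=100, must run as root, and the value resets on reboot unless persisted in /etc/sysctl.conf:

    /* Illustrative only: read and raise vm.swappiness (Linux). */
    #include <stdio.h>

    int main(void) {
        int current = -1;
        FILE *f = fopen("/proc/sys/vm/swappiness", "r");
        if (f) {
            if (fscanf(f, "%d", &current) != 1)
                current = -1;
            fclose(f);
        }
        printf("current vm.swappiness: %d\n", current);

        f = fopen("/proc/sys/vm/swappiness", "w");
        if (!f) { perror("need root to change swappiness"); return 1; }
        fprintf(f, "%d\n", 100);   /* favor swapping to the fast SSD */
        fclose(f);
        return 0;
    }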