From Wikipedia:
"commit charge is a term used in Microsoft Windows operating systems
to describe the total amount of pageable virtual address space."
"Total is the amount of pagefile-backed virtual address space in use,
i.e., the current commit charge. This is composed of main memory (RAM)
and disk (pagefiles)."
I think where you are going astray is that the commit charge does not include ALL your memory, only the memory that makes up the pageable virtual address space. You don't say whether that 1.9GB of physical memory is the total or the available amount, but some memory is not part of that pageable space at all, such as memory set aside for the OS and other hardware.
Here is my current Process Explorer screenshot (I updated my version since your previous question): I have a Commit Charge limit of 6.2GB, but 7GB of total memory and swap space. My current readings are similar in nature to yours: 2.4GB of Commit Charge in use, and 3.1GB of total memory in use.
http://en.wikipedia.org/wiki/Commit_charge
http://en.wikipedia.org/wiki/Virtual_address_space
By definition, inactive memory is memory that is ready to be paged out, and paging it out might involve writing it to swap. This is not a problem or something to be optimized away; it is in fact OS X working as designed.
Unfortunately, tech support writers are not kernel developers, and the Apple Knowledge Base support article quote is just wrong when it claims that Inactive memory is memory unused by programs. When you quit a program, all of its resident memory becomes Free; it doesn't stop over in Inactive. However, the second link to the developer site describing how memory management works is a good resource, if read fully.
There are many misconceptions about what "inactive memory" means in OS X. Contrary to the misconceptions, not all inactive memory is empty, unused, cache, or purgeable. In fact, Active memory can be cached or purgeable as well, if it has been recently accessed. Much inactive memory also contains data that cannot simply be discarded. If it were discarded, programs would crash, because the discarded pages would have contained valid data (as the quote from the OS X developer site says), and programs expect data they have stored in (virtual) memory to not just disappear.
Inactive memory contains the same types of data as active memory. The only difference is that OS X has noticed that some chunks of memory have not been read from or written to in a while.
The reason that OS X classifies some memory as inactive and other regions as "active" has to do with paging out. When memory runs low, you are going to have to page out some data. The question is, which data? If you page out data that a program turns out to immediately need again, it wastes time and accomplishes nothing. So you want to page out memory that a program won't immediately need to use again.
Anticipating which pages are likely to be unneeded in the future is difficult because a program can use its virtual memory however it likes and not tell the OS anything about what its plans are. But as a heuristic, most programs are "sticky" in their memory usage; if they haven't used some piece of memory in a while they are likely to continue not using that memory, and likely to continue using memory that they have recently used.
So when the OS decides to page out some data, it takes the strategy of swapping out pages that haven't been used recently. This is why OS X sorts the memory that is being occupied by programs into two piles of "active" and "inactive." The developer-site link posted above, if read fully, tells how that process happens (a rough sketch in code follows the list):
- When memory starts getting low, the OS starts going through the active memory pages, and sets a flag on each.
- If a program reads or writes to a page, the flag is cleared.
- If, after some delay, the flag is not cleared, that page gets sorted into the "inactive" pile.
- If an "inactive" page is accessed by its program, it is put back into the "active" pile.
- When memory runs out, the "inactive" pages are paged out.
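To make that list concrete, here is a minimal sketch in C of the same two-pile, flag-based aging idea. This is purely illustrative: all the names are invented, and the real kernel's pageout code is far more involved than this.

    /* Illustrative sketch of the active/inactive aging heuristic described
     * above. Invented names; not actual OS X kernel code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    enum which_list { ACTIVE, INACTIVE };

    struct page {
        bool flagged;             /* set by the scanner; cleared when the program touches the page */
        enum which_list list;
    };

    /* Models the program reading or writing a page. */
    void touch(struct page *p)
    {
        p->flagged = false;                  /* recently used, so clear the flag */
        if (p->list == INACTIVE)
            p->list = ACTIVE;                /* an accessed inactive page goes back to active */
    }

    /* Periodic scan, run only when free memory is getting low. */
    void scan(struct page *pages, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (pages[i].list != ACTIVE)
                continue;
            if (pages[i].flagged)
                pages[i].list = INACTIVE;    /* flag survived a whole pass: not used recently */
            else
                pages[i].flagged = true;     /* mark it; an access before the next pass clears this */
        }
    }

    int main(void)
    {
        struct page pages[3] = {{false, ACTIVE}, {false, ACTIVE}, {false, ACTIVE}};
        scan(pages, 3);                      /* first pass: every active page gets flagged */
        touch(&pages[0]);                    /* the program keeps using page 0 */
        scan(pages, 3);                      /* second pass: pages 1 and 2 become inactive */
        for (int i = 0; i < 3; i++)
            printf("page %d: %s\n", i, pages[i].list == ACTIVE ? "active" : "inactive");
        return 0;
    }

When memory actually runs out, the pages still sitting on the inactive pile are the ones chosen to be paged out, exactly as in the last bullet above.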
Note that this sorting process to decide which memory to swap out is similar across all modern operating systems. Linux has the same two lists of active and inactive pages, as described in Understanding the Linux Virtual Memory Manager. Windows might use something a bit different, with more than two classes of recency; I can't find a recent, reliable technical description at the moment. More implementations are discussed on the Wikipedia page entitled "Page replacement algorithm". The only difference with OS X was how the statistics were shown: someone decided it would be a good idea to show separate numbers for active and inactive in top or Activity Monitor. In retrospect this was probably not such a good idea (and this has changed in OS X 10.9).
This process of setting and clearing flags and maintaining the active/inactive piles does take a little bit of processor power. For that reason, OS X doesn't do it when there is a lot of free memory. So the first programs you start up will show up as all "active" memory until free memory starts running low.
So, as you start from a blank slate, and open more and more programs, you can expect to see the following progression in Activity Monitor:
- First, there is a lot of "free" memory and very little inactive. This is because the memory flagger hasn't started running.
- As the amount of free memory drops, OS X will start running its memory flagger, and you will start to see the amount of "inactive" rising. Each bit of "inactive" was previously "active."
- When you run out of free memory, pages from the "inactive" pile will be paged out. The memory-flagger will also be running full tilt sorting out memory into active and inactive. Typically, you will see a lot of "inactive" while swap is being written to, indicating that the memory-flagger is doing what it is supposed to.
Pages must be classified as inactive before they are swapped out. That is what the quote from the Apple Developer site means when it says "These pages contain valid data but may be released from memory at any time." This is in opposition to Active pages, which will not be released until after they have been demoted to Inactive. There are various ways of releasing pages; if the page was mapped from a file and has not been modified, it can be deleted immediately and re-read on demand. Similarly if it is memory that had been previously swapped out and not modified since it was swapped in. Programs can also explicitly allocate cache and purgeable memory, to store data that can be forgotten and recreated on demand (but the reason a program would allocate cache is if it takes significant time to recreate that data.) But much of inactive memory is memory that programs have written valid data to, and paging out this data requires writing to swap.
Therefore looking at the amount of "inactive" memory in Activity Monitor, and seeing that there is a lot of inactive at the same time as the computer is writing to swap, only tells you that the system is working as designed.
There is also a confusion between inactive memory and file cache. I'm not sure why there is that confusion, because Activity Monitor already lists them under separate headings. Cache is memory used to store recent data that have been read from or written to the file system, in case they need to be accessed again. When memory is low, OS X does tend to get rid of the cache first. If you have swap thrashing, and Activity Monitor shows a big pile of cache (NOT inactive), then that would be a problem. But inactive memory is a different thing.
If in doubt, ignore the distinction between "inactive" and "active." Regard them as being one lump of "memory used by programs" and add the two numbers together. This is what every other operating system does when telling you about memory usage.
NOTE for OS X 10.9: Mavericks introduced "memory compression" which is, more or less, another layer of swap. Active pages now get classified inactive, then compressed (which might show up as Kernel memory depending on what tools you are using), then written to swap as memory usage increases. Mavericks has also stopped showing separate numbers for active and inactive in Activity Monitor, since it turns out not to be a useful thing to look at, especially given the misconceptions surrounding it.
Best Answer
This is actually pretty straightforward once you understand that commit charge represents only potential - yet "guaranteed available if you want it" - use of virtual memory, while the "private working set" - which is essentially the RAM used by "committed" memory - is actual use, as is pagefile space. (But this is not all of the use of RAM, because there are other things that use RAM).
Let's assume we're talking about 32-bit systems, so the maximum virtual address space available to each process is normally 2 GiB. (There is no substantial difference in any of the following for 64-bit systems, except that the addresses and sizes can be larger - much larger.)
Now suppose a program running in a process uses VirtualAlloc (a Win32 API) to "commit" 2 MiB of virtual memory. As you'd expect, this will show up as an additional 2 MiB of commit charge, and there are 2 MiB fewer bytes of virtual address space available in the process for future allocations.
But it will not actually use any physical memory (RAM) yet!
The VirtualAlloc call will return to the caller the start address of the allocated region; the region will be somewhere in the range 0x10000 through 0x7FFEFFFF, i.e. about 2 GiB. (The first and last 64KiB, or 0x10000 in hex, of v.a.s. in each process are never assigned.)
But again - there is no actual physical use of 2 MiB of storage yet! Not in RAM, not even in the pagefile. (There is a tiny structure called a "Virtual Address Descriptor" that describes the start v.a. and length of the private committed region.)
So there you have it! Commit charge has increased, but physical memory usage has not.
This is easy to demonstrate with the Sysinternals tool testlimit.

Sometime later, let's say the program stores something (i.e. a memory write operation) in that region (it doesn't matter where). There is not yet any physical memory underneath any of the region, so such an access will incur a page fault. In response, the OS's memory manager, specifically the page fault handler routine (the "pager" for short... it's called MiAccessFault), will:
- find a free page of RAM and zero it (this is a brand-new private page, so it must start out as zeroes),
- set up the page table entry so that the faulting virtual page maps to that physical page, and
- dismiss the fault, so the faulting instruction is retried and this time succeeds.
You have now "faulted" one page (4 KiB) into the process. And physical memory usage will increment accordingly, and "available" RAM will decrease. Commit charge does not change.
Sometime later, if that page has not been referenced for a while and demand for RAM is high, this might happen:
1. The OS removes the page from the process's working set (the set of its pages that are currently resident in RAM).
2. Because the page was written to while it was in the working set, it is put on the "modified page list."
3. The memory manager's modified page writer writes the page's contents out to the pagefile.
4. The page is then moved to the "standby page list."
5. From the standby list, the page can be repurposed for some other use, since its contents are now safely in the pagefile.
6. If the process references the page again before it is repurposed, it is "soft-faulted" back into the working set with no disk I/O; if it has already been repurposed, the contents are read back in from the pagefile (a "hard" page fault).
If you don't have a pagefile, then steps 3 through 5 are changed to:
The pages sit on the modified list, since there's nowhere to write their contents.
Step 6 remains the same, since pages on the modified list can be faulted back into the process that lost them as a "soft" page fault. But if that doesn't happen the pages sit on the modified list until the process deallocates the corresponding virtual memory (maybe because the process ends).
There is other use of virtual address space, and of RAM, besides private committed memory. There is mapped virtual address space, for which the backing store is some specified file rather than the pagefile. The pages of mapped v.a.s. that are paged in are reflected in RAM usage, but mapped memory does not contribute to commit charge because the mapped file provides the backing store: Any part of the mapped region that isn't in RAM is simply kept in the mapped file. Another difference is that most file mappings can be shared between processes; a shared page that's already in memory for one process can be added to another process without going to disk for it again (another soft page fault).
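Here, for illustration only, is what the mapped case looks like in Win32 C. CreateFileA, CreateFileMappingA and MapViewOfFile are the real calls; the file name is just a placeholder for any existing file. Mapping the file consumes virtual address space, and reading from the view brings pages into RAM, but commit charge does not go up by the size of the file, because the file itself is the backing store.

    /* Sketch: a read-only file mapping. Uses v.a.s. and (when touched) RAM,
     * but adds essentially nothing to commit charge. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* "some_big_file.dat" is a placeholder; substitute any existing file. */
        HANDLE file = CreateFileA("some_big_file.dat", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (mapping == NULL) {
            fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
            return 1;
        }

        /* Map a view of the whole file into our virtual address space. */
        const char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view == NULL) {
            fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* Reading a byte pages that part of the file into RAM (RAM usage rises),
         * still without adding to commit charge. */
        printf("first byte of the file: %d\n", view[0]);

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }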
And there is nonpageable v.a.s., for which there is no backing store because it's always resident in RAM. This contributes to the reported RAM usage and to the "commit charge" as well.
No. It has nothing to do with compression. Memory compression in Windows is done as an intermediate step, on pages that otherwise would be written to the pagefile. In effect it allows the modified page list to use less RAM to contain more stuff, at some cost in CPU time but with far greater speed than pagefile I/O (even to an SSD). Since commit limit is calculated from total RAM + pagefile size, not RAM usage + pagefile usage, this doesn't affect commit limit. Commit limit doesn't change with how much RAM is in use or what it's in use for.
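To put made-up but illustrative numbers on that: a machine with 8 GiB of RAM and a 4 GiB pagefile has a commit limit of roughly 12 GiB, and it stays roughly 12 GiB whether 1 GiB or 7 GiB of that RAM currently holds data, and whether or not the compressed store is holding pages that would otherwise have gone to the pagefile.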
It isn't that Windows is being inefficient. It's the apps you're running. They're committing a lot more v.a.s. than they're actually using.
The reason for the entire "commit charge" and "commit limit" mechanism is this: When I call VirtualAlloc, I am supposed to check the return value to see if it's non-zero. If it's zero, it means that my alloc attempt failed, likely because it would have caused commit charge to exceed commit limit. I'm supposed to do something reasonable like try committing less, or exiting the program cleanly.
If VirtualAlloc returned nonzero, i.e. an address, that tells me that the system has made a guarantee - a commitment, if you will - that however many bytes I asked for, starting at that address, will be available if I choose to access them; that there is someplace to put it all - either RAM or the pagefile. In other words, there is no reason to expect any sort of failure in accessing anything within that region. That's good, because it would not be reasonable to expect me to check for "did it work?" on every access to the allocated region.
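Here is a minimal sketch of that calling discipline in C. The sizes and the "ask for half as much" fall-back are invented for illustration; the point is simply that the one return-value check at commit time replaces per-access checks later.

    /* Sketch: check VirtualAlloc's result once, at commit time; if the commit
     * is refused, degrade gracefully instead of failing later at access time. */
    #include <windows.h>
    #include <stdio.h>

    void *commit_some(SIZE_T want, SIZE_T minimum, SIZE_T *got)
    {
        while (want >= minimum) {
            void *p = VirtualAlloc(NULL, want, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (p != NULL) {
                /* Success: the OS has guaranteed room (RAM or pagefile) for
                 * 'want' bytes, so later accesses need no per-access checks. */
                *got = want;
                return p;
            }
            want /= 2;   /* commit refused (e.g. commit limit reached): ask for less */
        }
        return NULL;     /* couldn't commit even the minimum: fail cleanly */
    }

    int main(void)
    {
        SIZE_T got = 0;
        void *buf = commit_some(256 * 1024 * 1024, 16 * 1024 * 1024, &got);
        if (buf == NULL) {
            fprintf(stderr, "Not enough commit available; exiting cleanly.\n");
            return 1;
        }
        printf("Committed %lu MiB\n", (unsigned long)(got / (1024 * 1024)));
        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }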
The "cash lending bank" analogy
It's a little like a bank offering credit, but strictly on a cash-on-hand basis. (This is not, of course, how real banks work.)
Suppose the bank starts with a million dollars cash on hand. People go to the bank and ask for lines of credit in varying amounts. Say the bank approves me for a $100,000 line of credit (I create a private committed region); that doesn't mean that any cash has actually left the vault. If I later actually take out a loan for, say, $20,000 (I access a subset of the region), that does remove cash from the bank.
But whether I take out any loans or not, the fact that I've been approved for a maximum of $100K means the bank can subsequently only approve another $900,000 worth of lines of credit, total, for all of its customers. The bank won't approve credit in excess of its cash reserves (ie it won't overcommit them), since that would mean the bank might have to turn a previously-approved borrower away when they later show up intending to take out their loan. That would be very bad because the bank already committed to allowing those loans, and the bank's reputation would plummet.
Yes, this is "inefficient" in terms of the bank's use of that cash. And the greater the disparity between the lines of credit the customers are approved for and the amounts they actually loan, the less efficient it is. But that inefficiency is not the bank's fault; it's the customers' "fault" for asking for such high lines of credit but only taking out small loans.
The bank's business model is that it simply cannot turn down a previously-approved borrower when they show up to get their loan - to do so would be "fatal" to the customer. That's why the bank keeps careful track of how much of the loan fund has been "committed".
I suppose that expanding the pagefile, or adding another one, would be like the bank going out and getting more cash and adding it to the loan fund.
If you want to model mapped and nonpageable memory in this analogy... nonpageable is like a small loan that you are required to take out and keep out when you open your account. (The nonpageable structures that define each new process.) Mapped memory is like bringing your own cash along (the file that's being mapped) and depositing it in the bank, then taking out only parts of it at a time (paging it in). Why not page it all in at once? I don't know, maybe you don't have room in your wallet for all that cash. :) This doesn't affect others' ability to borrow money because the cash you deposited is in your own account, not the general loan fund. This analogy starts breaking down about there, especially when we start thinking about shared memory, so don't push it too far.
Back to the Windows OS: The fact that you have much of your RAM "available" has nothing to do with commit charge and commit limit. If you're near the commit limit that means the OS has already committed - i.e. promised to make available when asked for - that much storage. It doesn't have to be all in use yet for the limit to be enforced.
Well, I'm sorry, but if you're running into commit limit, there are just three things you can do:
1. Add more RAM.
2. Increase your pagefile size, or add another pagefile.
3. Run fewer programs at a time (or use programs that commit less memory).
Re option 2: You could put a second pagefile on a hard drive. If the apps are not actually using all that committed memory - which apparently they're not, since you're seeing so much free RAM - you won't actually be accessing that pagefile much, so putting it on a hard drive won't hurt performance. If the slowness of a hard drive would still bother you, another option is to get a small and therefore cheap second SSD and put your second pagefile on that. The one "showstopper" would be a laptop with no way to add a second "non-removable" drive. (Windows will not let you put pagefiles on removable drives, like anything connected with USB.)
Here is another answer I wrote that explains things from a different direction.
p.s.: You asked about Windows 10, but I should tell you that it works the same way in every version of the NT family, back to NT 3.1, and prerelease versions too. What has likely changed is Windows' default setting for pagefile size, from 1.5x or 1x RAM size to much smaller. I believe this was a mistake.