MongoDB Not Using All Available RAM

Tags: memory, mongodb

I have around 200 GB of data stored in a mongo cluster. The physical memory on one of the instances running mongod is 8 GB, and nothing else of any consequence runs on that instance. As far as I can tell from Mongo's docs (like this one: http://www.mongodb.org/display/DOCS/Checking+Server+Memory+Usage), this means the mongod process should be using close to 100% of the available physical memory. But if you look at the following output from the top command, you'll see that the mongod process is only using 2 GB of resident memory and there's a full 2 GB of free physical memory which isn't being used at all.

Can someone explain this behavior to me? Why is there 2GB of free memory?
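For reference, mongod's own view of its memory can be queried with serverStatus from the mongo shell; on a 2.x memory-mapped deployment the mem section reports resident, virtual, and mapped sizes in MB. A minimal sketch (host and port are placeholders for the instance in question):

# Ask mongod for its own memory accounting (values are in MB).
# localhost:27017 is a placeholder; point this at the affected instance.
$ mongo --host localhost --port 27017 --eval 'printjson(db.serverStatus().mem)'
# Compare the "resident" field against the 8 GB of physical RAM on the box;
# "virtual" and "mapped" should roughly track the size of the data files.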

top output:

top - 23:19:43 up 89 days, 20:05,  2 users,  load average: 0.41, 0.55, 0.59
Tasks: 101 total,   1 running, 100 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.0%us,  1.3%sy,  0.0%ni, 93.9%id,  2.6%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   8163664k total,  6131764k used,  2031900k free,    54976k buffers
Swap: 16771848k total,    10604k used, 16761244k free,  5367700k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                             
 1401 mongodb   20   0  174g 2.0g 1.9g S   23 26.2  18070:55 mongod
 ...

System Info:

$ uname -a
Linux aluminum 2.6.32-31-server #61-Ubuntu SMP Fri Apr 8 19:44:42 UTC 2011 x86_64 GNU/Linux

Notes:

  • There is another instance in this cluster where mongod is behaving as I would expect and utilizing all available memory.
  • Looking at mongostat, it looks like we're consistently having some page faulting, so the amount of memory used should be growing (see the sketch after this list).
  • (I asked this same question on the mongodb-user google group but got no response.)
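Page-fault activity can be watched directly with mongostat; a sustained non-zero faults column means mongod keeps reaching for pages that are not yet resident, so RES should keep climbing. A minimal sketch (the 5-second interval is an arbitrary choice):

# Print one line of statistics every 5 seconds and watch the "faults" column;
# sustained non-zero values mean pages are still being pulled in from disk.
$ mongostat 5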

Best Answer

The resident memory size represents the number of pages in memory actually touched by the mongod process. If that is significantly lower than the available memory and data exceeds the available memory (yours does), then it could be a case of simply not having actively touched enough pages yet.

To determine whether this is the case, run free -m; the output should look something like this:

free -m
             total       used       free     shared    buffers     cached
Mem:          3709       3484        224          0         84       2412
-/+ buffers/cache:        987       2721
Swap:         3836        156       3680

In my example, cached is not close to the total, which means that not only has mongod not touched enough pages, but the filesystem cache has not even been filled by pages being read from disk in general.
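As a rough check, you can compute how full the filesystem cache is straight from free. A minimal sketch (the field positions assume the traditional free layout shown above, where the Mem: line ends with buffers and cached):

# Print cached memory as a percentage of total RAM (assumes the classic
# "Mem:" line layout: column 7 is "cached", column 2 is "total").
$ free -m | awk '/^Mem:/ { printf "cache fill: %.0f%%\n", $7 / $2 * 100 }'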

A quick remedy for this would be the touch command (added in 2.2). It should be used with caution on large data sets, as it will attempt to load everything into RAM even if the data is far too large to fit (causing a lot of disk IO and page faults). It will certainly fill up the memory effectively though :)
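For reference, touch is a database command run per collection from the mongo shell. A minimal sketch (the database and collection names are placeholders):

# Load one collection's data and indexes into memory.
# "mydb" and "mycollection" are placeholders for your own names.
$ mongo mydb --eval 'printjson(db.runCommand({ touch: "mycollection", data: true, index: true }))'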

If your cached value is close to the total available, then your issue is that a large number of pages being read into memory from disk are not relevant to (and hence not touched by) the mongod process. The usual candidate for this kind of discrepancy is readahead. I've already covered that particular topic elsewhere in detail, so I'll just link those two answers for future reading if necessary.
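If you want to inspect readahead yourself, blockdev will show and set it per block device. A minimal sketch (the device name is a placeholder, and 32 sectors, i.e. 16 KB, is just a commonly suggested starting point for MongoDB, not a rule):

# Show the current readahead setting (in 512-byte sectors) for the device
# holding the MongoDB data files; /dev/sda is a placeholder.
$ sudo blockdev --getra /dev/sda

# Lower it to 32 sectors (16 KB) so large readahead doesn't flood the page
# cache with data mongod never touches; the change lasts until reboot.
$ sudo blockdev --setra 32 /dev/sda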