MongoDB terminates when it runs out of memory

Tags: docker, limits, memory, mongodb

I have the following configuration:

  • a host machine that runs three docker containers:
    • MongoDB
    • Redis
    • A program using the previous two containers to store data

Both Redis and MongoDB are used to store huge amounts of data. I know Redis needs to keep all its data in RAM and I am fine with this. Unfortunately, what happens is that mongo starts taking up a lot of RAM and as soon as the host RAM is full (we're talking about 32GB here), either mongo or Redis crashes.

I have read the following previous questions about this:

  1. Limit MongoDB RAM Usage: apparently most RAM is used up by the WiredTiger cache
  2. MongoDB limit memory: here apparently the problem was log data
  3. Limit the RAM memory usage in MongoDB: here they suggest limiting mongo's memory so that it uses a smaller amount for its cache/logs/data
  4. MongoDB using too much memory: here they say it's the WiredTiger caching system, which tends to use as much RAM as possible to provide faster access. They also state it's completely okay to limit the WiredTiger cache size, since it handles I/O operations pretty efficiently
  5. Is there any option to limit mongodb memory usage?: caching again; they also add that MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release
  6. MongoDB index/RAM relationship: quote: MongoDB keeps what it can of the indexes in RAM. They'll be swapped out on an LRU basis. You'll often see documentation that suggests you should keep your "working set" in memory: if the portions of index you're actually accessing fit in memory, you'll be fine.
  7. how to release the caching which is used by MongoDB?: same answer as in 5.

Now what I appear to understand from all these answers is that:

  1. For faster access it would be better for mongo to fit all indices in RAM. However, in my case, I am fine with indices partially residing on disk, as I have a fairly fast SSD.
  2. RAM is mostly used for caching by mongo.

Considering this, I was expecting mongo to try to use as much RAM as possible, but also to be able to function with little RAM, fetching most things from disk. However, when I limited the mongo Docker container's memory (to 8 GB, for instance) using --memory and --memory-swap, instead of fetching stuff from disk, mongo simply crashed as soon as it ran out of memory.
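For reference, the setup described above would look something like this (the container name, volume path, and image tag are illustrative, not taken from the question):

```shell
# Hypothetical setup: cap the mongo container at 8 GB with no extra swap.
# With --memory-swap equal to --memory, the container gets zero swap,
# so when mongod exceeds the cgroup limit it is OOM-killed by the kernel
# rather than paging out -- which matches the crash described above.
docker run -d --name mongo \
  --memory 8g --memory-swap 8g \
  -v /data/mongo:/data/db \
  mongo:3.4
```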

How can I force mongo to use only the available memory and to fetch from disk everything that does not fit into memory?

Best Answer

As per the MongoDB documentation, changed in version 3.4: values can range from 256 MB to 10 TB and can be a float. In addition, the default value has also changed.

Starting in 3.4, the WiredTiger internal cache, by default, will use the larger of either:

50% of RAM minus 1 GB, or
256 MB.
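Concretely, for a 32 GB host like the one in the question, that default works out as follows (a quick arithmetic sketch, values in MB):

```shell
# Default WiredTiger internal cache size since MongoDB 3.4:
# the larger of (50% of RAM minus 1 GB) and 256 MB.
ram_mb=$((32 * 1024))                      # host RAM: 32 GB, in MB
half_minus_one=$(( ram_mb / 2 - 1024 ))    # 50% of RAM minus 1 GB
cache_mb=$(( half_minus_one > 256 ? half_minus_one : 256 ))
echo "${cache_mb} MB"                      # 15360 MB, i.e. a 15 GB cache
```

Note that older mongod versions size this default from the host's total RAM, not from the container's cgroup limit, which is one reason a bare --memory cap can end in an OOM kill instead of graceful eviction.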

With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.

Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.

The storage.wiredTiger.engineConfig.cacheSizeGB limits the size of the WiredTiger internal cache. The operating system will use the available free memory for filesystem cache, which allows the compressed MongoDB data files to stay in memory. In addition, the operating system will use any free RAM to buffer file system blocks and file system cache.

To accommodate the additional consumers of RAM, you may have to decrease the WiredTiger internal cache size.
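Putting this together for the 8 GB container from the question, one sketch (container name, volume path, image tag, and the 3.5 GB figure are illustrative) is to cap the WiredTiger cache explicitly so it fits inside the cgroup limit with headroom left over:

```shell
# Sketch: 8 GB container limit, WiredTiger internal cache capped well
# below it, leaving room for connections, aggregations, in-memory
# sorting, and the filesystem cache.
docker run -d --name mongo \
  --memory 8g --memory-swap 8g \
  -v /data/mongo:/data/db \
  mongo:3.4 \
  mongod --wiredTigerCacheSizeGB 3.5
```

The same cap can be set in the config file via storage.wiredTiger.engineConfig.cacheSizeGB instead of the command-line flag.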

For further reference, see WiredTiger Storage Engine and Configuration File Options.
