MongoDB crash after WiredTiger "Cannot allocate memory" error

mongodb, mongodb-3.2, Ubuntu, wiredtiger

I am using MongoDB (3.2.6) as storage for my application, and for the past few weeks MongoDB has been crashing roughly every one to two weeks. The crash seems to stem from a failure during WiredTiger memory allocation; here is the full stack trace of the last crash:

2016-10-09T18:25:36.389+0200 E STORAGE  [thread1] WiredTiger (12) [1476030336:325695][993:0x7f559c19a700], file:index-1108-8065571460375661294.wt, WT_SESSION.checkpoint: memory allocation of 18624 bytes failed: Cannot allocate memory
2016-10-09T18:25:36.547+0200 E STORAGE  [thread1] WiredTiger (12) [1476030336:422567][993:0x7f559c19a700], file:index-1108-8065571460375661294.wt, WT_SESSION.checkpoint: checkpoints cannot be dropped when in-use: Cannot allocate memory
2016-10-09T18:25:37.347+0200 I COMMAND  [conn10793] command admin.$cmd command: ping { ping: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{} protocol:op_query 208ms
2016-10-09T18:25:37.386+0200 E STORAGE  [thread1] WiredTiger (12) [1476030337:254969][993:0x7f559c19a700], checkpoint-server: checkpoint server error: Cannot allocate memory
2016-10-09T18:25:38.448+0200 E STORAGE  [thread1] WiredTiger (-31804) [1476030337:484838][993:0x7f559c19a700], checkpoint-server: the process must exit and restart: WT_PANIC: WiredTiger library panic
2016-10-09T18:25:38.677+0200 I -        [conn10894] Fatal Assertion 28559
2016-10-09T18:25:38.994+0200 I -        [WTJournalFlusher] Fatal Assertion 28559
2016-10-09T18:25:38.994+0200 I -        [TTLMonitor] Fatal Assertion 28559
2016-10-09T18:25:40.978+0200 I -        [thread1] Fatal Assertion 28558
2016-10-09T18:25:41.207+0200 I -        [thread1] 

The server it's running on has 7 GB of RAM and is used exclusively for MongoDB.

free -m
             total       used       free     shared    buffers     cached
Mem:          6811       6140        670          0         28        634
-/+ buffers/cache:       5478       1332
Swap:         2047       1009       1038

Several databases are running on it, with a total size of around 925 MB, while the WiredTiger cache status is:

db.serverStatus().wiredTiger.cache
"bytes currently in the cache" : 969927022,
"bytes read into cache" : 800513540,
"bytes written from cache" : 1349774879,
"maximum bytes configured" : 3221225472, 
[...]
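
For reference, the same numbers can be compared directly in the mongo shell; this is just a convenience sketch using the field names shown above:

var cache = db.serverStatus().wiredTiger.cache;
var used = cache["bytes currently in the cache"];
var max = cache["maximum bytes configured"];
// show how full the cache is relative to its configured maximum
print((used / max * 100).toFixed(1) + "% of a " + (max / 1024 / 1024 / 1024).toFixed(2) + " GB cache in use");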

According to the documentation, WiredTiger should by default use:

60% of RAM minus 1 GB, or […] (https://docs.mongodb.com/manual/faq/storage/#to-what-size-should-i-set-the-wiredtiger-cache)

Based on 7 GB of RAM we get 0.6 * 7 GB - 1 GB ≈ 3.2 GB, which matches the "maximum bytes configured" value of 3221225472 bytes (~3.2 GB) above, so the default looks right.

Now, how can databases totaling 925 MB bust a 3.2 GB cache? I know the indexes have to be taken into account, but even if every field were indexed (which is not the case), they should not fill the remaining 2+ GB. The last crash occurred yesterday, so the stats above were taken just after a fresh restart. Could it be that the way I'm using MongoDB causes a memory leak somewhere, slowly increasing memory consumption? I may be missing the point entirely, as I am new to MongoDB, so any advice or pointer to an article would be a relief.
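
To sanity-check how much the data and indexes actually occupy, one rough approach is to sum the sizes over all databases in the mongo shell (a sketch; db.stats() takes a scale factor, here MB):

var totalData = 0, totalIndex = 0;
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
    // stats(1024 * 1024) returns sizes scaled to MB
    var s = db.getSiblingDB(d.name).stats(1024 * 1024);
    totalData += s.dataSize;
    totalIndex += s.indexSize;
});
print("data: " + totalData.toFixed(0) + " MB, indexes: " + totalIndex.toFixed(0) + " MB");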

I found this issue: https://jira.mongodb.org/browse/SERVER-24408, but it does not provide any further explanation.

EDIT 12/10:

Three days later, the cache status is the following:

"bytes currently in the cache" : 1319874144,
"bytes read into cache" : 1054001990,
"bytes written from cache" : 8936911271,

Best Answer

Potentially the Unix box is forking the process from your application, causing a duplicate process that tries to allocate the same amount of memory and crashes when that allocation fails.