There are a couple of separate points here, but I don't think how MongoDB stores data in RAM is really relevant. MongoDB just uses the `mmap()` call and lets the kernel take care of memory management - the Linux kernel uses a Least Recently Used (LRU) policy by default to decide what to page out and what to keep (there are more specifics to that, but they're not terribly relevant here).
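If you want to see what that looks like in practice, the `mem` section of `serverStatus` reports the mapped versus resident sizes (a minimal mongo shell sketch; the `mapped` field is only meaningful on the MMAP-based storage engine):

```javascript
// Compare the total size of the mmap()ed data files with what the kernel
// currently keeps resident in physical RAM - the difference has been paged out.
var mem = db.serverStatus().mem;
print("mapped (MB):   " + mem.mapped);
print("resident (MB): " + mem.resident);
print("virtual (MB):  " + mem.virtual);
```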
In terms of your issues, it sounds like you might have had a corrupt index, though the evidence is somewhat circumstantial. Now that you have run a repair (the `validate()` command would have confirmed or denied this beforehand), there won't be any evidence in the current data, but you may find more in the logs, particularly from when you were attempting to recreate the index or use it in queries.
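For future reference, checking a collection and its indexes for corruption without resorting to a full repair looks something like this (mongo shell sketch; `foo.bar` is a hypothetical database and collection):

```javascript
// validate(true) runs a full validation, scanning both the documents and the
// indexes; it takes locks, so run it during a quiet period.
var result = db.getSiblingDB("foo").bar.validate(true);
printjson(result);  // look for valid: false and any reported errors
```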
As for the spikes in page faults, btree stats, journal activity, lock percentage, and average flush time - that has all the hallmarks of a bulk delete, which causes a lot of index updates and a large amount of IO. The fact that mapped memory drops off later in the graphs suggests that the repair significantly reduced the storage size, which usually indicates significant fragmentation (bulk deletes, along with updates that grow documents, are the leading causes of fragmentation).
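You can gauge that fragmentation by comparing the logical data size with the allocated storage (mongo shell sketch; `foo.bar` again hypothetical):

```javascript
var s = db.getSiblingDB("foo").bar.stats();
// size is the logical size of the documents; storageSize is what is actually
// allocated on disk. A large gap is a sign of fragmentation that a repair
// would reclaim.
print("data size (bytes):    " + s.size);
print("storage size (bytes): " + s.storageSize);
print("storage/data ratio:   " + (s.storageSize / s.size).toFixed(2));
```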
Therefore, I would look for a large delete operation logged as slow in the logs - it will only be logged once complete, so look for it to appear after the end of the events in MMS. One of the quirks of not running in a replica set is that a bulk operation like this is relatively non-obvious - it shows up as a single delete operation in the MMS graphs (usually lost in the noise).
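The mongod log is the primary place to find the completed operation, but if you happened to have the profiler enabled at the time (see the profiler discussion further down), the same slow delete would also be recorded and queryable (mongo shell sketch; deletes are recorded with `op: "remove"`):

```javascript
// The five slowest recorded deletes in this database, most expensive first.
db.system.profile.find({ op: "remove" }).sort({ millis: -1 }).limit(5).pretty()
```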
These bulk delete operations tend to be run on older data that has not been used recently and has hence been paged out of active memory by the kernel (LRU again). To delete that data you must page it all back in and then flush the changes to disk, and of course deletes require the write lock - hence the spikes in faults, lock percentage, etc.
To make room for the data being paged back in, your current working set is paged out, which will hurt performance for your normal usage until the deletes complete and the memory pressure eases.
FYI - when you run a replica set, bulk ops are serialized in the oplog and hence replicated one at a time - as such, you can track such operations by their footprint in the replicated-ops stats of the secondaries. This is not possible with a standalone instance, short of digging through the logs for the completed ops and other secondary indications.
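On a replica set, that footprint is easy to see by querying the oplog directly (mongo shell sketch; `foo.bar` is a hypothetical namespace):

```javascript
// Each document removed by a bulk delete becomes its own "d" (delete) entry
// in the oplog, so a bulk delete leaves a very visible trail.
var oplog = db.getSiblingDB("local").oplog.rs;
oplog.find({ op: "d", ns: "foo.bar" }).sort({ $natural: -1 }).limit(10).pretty()
```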
As for managing large deletes in the future, it is generally far more efficient to partition your data into separate databases (if possible) and simply drop each database once its data is no longer needed. This requires some extra management on the application side, but it negates the need for bulk deletes, completes far quicker, and limits fragmentation; dropping a database also removes its files on disk, preventing excessive storage use. Definitely recommended if your use case allows it.
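As a rough sketch of that pattern (the per-month `logs_YYYYMM` naming and the three-month retention window are assumptions for illustration):

```javascript
// Write new data to a per-month database (logs_201401, logs_201402, ...) and
// drop whole databases that fall outside the retention window - dropping a
// database removes its files on disk, so no bulk delete and no fragmentation.
var retainMonths = 3;
var now = new Date();
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  var m = d.name.match(/^logs_(\d{4})(\d{2})$/);
  if (!m) return;  // not one of our partitioned databases
  var ageMonths = (now.getFullYear() - +m[1]) * 12 + (now.getMonth() + 1 - +m[2]);
  if (ageMonths > retainMonths) {
    db.getSiblingDB(d.name).dropDatabase();
  }
});
```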
First off, to correct the comment: the profiler does not write to the `local` database, it writes to the `system.profile` collection in whatever database you are profiling. So if you were reading from database `foo` and had profiling turned on, the collection would be `foo.system.profile`. You could turn profiling on for the `local` database, and hence have the profiler write there, of course, but that is not a default of any kind.
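To illustrate (mongo shell sketch; `foo` is the hypothetical database being profiled):

```javascript
var foo = db.getSiblingDB("foo");
foo.setProfilingLevel(1, 100);  // level 1: record operations slower than 100ms
// ... run some queries against foo ...
// The profiler output lands in foo.system.profile, not in the local database:
foo.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
```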
Then, if you have the profiler enabled, regardless of where it writes to, all writes require the write lock, whether they are to the `system.profile` collection or anywhere else. Just because its input is generated internally does not exempt the `system.profile` collection from the same rules as any other collection in a database.
If you turn the profiler off, I would expect the lock to disappear. There may still be the occasional blip in the global lock percentage in the stats, but that is essentially just background noise.
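Turning it off (and confirming) is a one-liner each (mongo shell sketch, against the same hypothetical `foo` database):

```javascript
var foo = db.getSiblingDB("foo");
foo.setProfilingLevel(0);    // level 0: profiler off
foo.getProfilingStatus();    // should report { was: 0, slowms: ... }
```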
Best Answer
MMS charts have multiple quantities you can add up to determine the number of reads and the number of writes - at a glance, these are the opcounters:

Reads:

- query
- getmore

Writes:

- insert
- update
- delete
As a general ballpark figure, the ratio of those two sums is fine. It's not quite that simple though, because there are other read and write "loads" on the system that raw operation counts don't capture: a single read (a `find`) may be a fast, indexed query or a slow collection scan of millions of documents, yet each counts as one operation.

Hope this helps - in general, you usually care not about the exact absolute ratio of reads to writes, but rather about how that ratio may be changing over time.