The problem is that you are making several wrong assumptions. Below you will find some corrections.
MongoDB is not optimized for small resources
MongoDB was specifically designed to handle a lot of data (recording clickstreams was among its first applications, iirc). As far as I can tell, you have your django app on the same server as MongoDB. The problem here is that a lot of users on the django side translates into a lot of queries, aggregations and write operations on the MongoDB side, so django and MongoDB end up racing for resources, especially at high load. Since django is first in the stack, it will almost always "win" that race, for example by claiming RAM that MongoDB can then no longer allocate. So it may well happen that MongoDB refuses an operation for lack of resources, the request is cancelled, and your system appears to do nothing, while in reality both parts of your application did their best to answer the request but failed for lack of resources.
To be honest: running MongoDB alone on a 1GB instance would imho not be reasonable, let alone together with a django application. With this setup, you should have at the very least 4GB of RAM, and even that might only hold until you put real load on it. For comparison: I usually suggest between 32 and 128GB of RAM per node (depending on the data, the indices and a few other factors) for machines using SSDs as storage technology. Mind you, that is for MongoDB only – at a corresponding scale of data, of course.
"Working set" does not (only) mean cache
Disclaimer: brutally simplified and terminology might be off
MMAPv1 uses memory mapped files. All the details put aside, this means that a file is treated as an addressable range of memory: if MongoDB wants to read a certain doc, it uses a memory address plus the range it wants to read. That address might already be backed by RAM, or the data has to be read from disk. Or – and here is the misconception – from the OS's filesystem cache, which, you guessed it, also resides in RAM, just in a different part of it. (Iirc, what happens in this situation is that the address a pointer refers to is changed.) So not only does MongoDB hold the working set in RAM, it is also the direct cause of quite some part of the filesystem cache. In other words, there is yet another part of MongoDB requiring even more RAM than the working set alone.
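The idea of a file as an addressable range of memory can be sketched with Python's `mmap` module. The file, offset and contents below are purely illustrative – real MMAPv1 data files are preallocated in large extents – but the mechanism is the same: "reading a document" is just reading a byte range at an offset, and the OS decides whether that comes from RAM, the page cache, or disk.

```python
import mmap
import os
import tempfile

# Create a small data file standing in for a MongoDB data file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    # Map the whole file into the process address space.
    mm = mmap.mmap(f.fileno(), 0)
    # Accessing a "document" is just reading/writing a byte range at an
    # offset; the OS pages the data in from disk or the filesystem cache.
    mm[128:144] = b"hello, document!"
    doc = bytes(mm[128:144])
    mm.flush()   # ask the OS to write dirty pages back to disk
    mm.close()

os.remove(path)
print(doc)  # b'hello, document!'
```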
The working set is not the only thing consuming memory
- Actually, the way journaling works, it doubles the RAM required by MongoDB.
- Each connection (and remember each driver basically opens a connection pool) gets 1MB of RAM allocated.
- Operations need some memory. Let's take aggregations as an example: they are capped at 100MB of memory consumption – that alone would be 10% of your RAM, 5% of your allocatable memory.
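The back-of-the-envelope arithmetic above can be made concrete. The 1MB-per-connection and 100MB-per-aggregation figures come from the points above; the connection count and the assumed Django and OS footprints are illustrative guesses, not measurements:

```python
# Rough memory budget for the 1 GB machine from the question.
total_ram_mb = 1024

connections = 100            # assumed driver connection pool size
connection_mb = connections * 1   # ~1 MB allocated per connection
aggregation_mb = 100         # one aggregation at its in-memory cap
django_mb = 300              # assumed footprint of the Django app
os_mb = 150                  # assumed OS baseline

left_for_working_set_mb = (
    total_ram_mb - connection_mb - aggregation_mb - django_mb - os_mb
)
print(left_for_working_set_mb)  # 374 MB left, before working set and journal
```

Even with generous assumptions, well under half the machine is left for the working set, the journal and the filesystem cache combined.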
However since you use MMAPv1: Do NOT turn off journaling! It is vital for crash recovery in MMAPv1.
Conclusion
Your machine is vastly underprovisioned in terms of RAM. Even on a tight budget, I cannot stress enough the need to put more RAM into that machine. I would put at least 4GB into it (physical RAM, that is, not swap) and see how it goes.
Be aware, though, that with this setup django and MongoDB will compete for resources the most exactly when you can afford it the least: when your application has comparatively many concurrent users.
If you are running on Linux, you can use control groups to limit MongoDB memory as shown in the following article:
Easy Steps to Limit Mongodb Memory Usage by Ramakanta Sahoo
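The linked article uses the cgroup v1 interface (`memory.limit_in_bytes`); the same idea on a cgroup v2 system can be sketched as below. The function name, the cgroup name `mongodb`, the limit and the pid are all illustrative, and on a real machine this needs to run as root against `/sys/fs/cgroup`:

```python
from pathlib import Path

def limit_memory(cgroup_root: str, name: str, limit_bytes: int, pid: int) -> Path:
    """Create a cgroup under cgroup_root and cap its memory (cgroup v2 layout).

    On a real system cgroup_root is /sys/fs/cgroup and root is required;
    name, limit_bytes and pid are whatever you choose for your mongod.
    """
    cg = Path(cgroup_root) / name
    cg.mkdir(parents=True, exist_ok=True)
    (cg / "memory.max").write_text(str(limit_bytes))  # hard memory cap
    (cg / "cgroup.procs").write_text(str(pid))        # move the process in
    return cg

# e.g. limit_memory("/sys/fs/cgroup", "mongodb", 256 * 1024 * 1024, mongod_pid)
```

Note that capping memory this way does not make an undersized machine adequate – once MongoDB hits the limit, the kernel starts reclaiming (and ultimately killing), so the RAM advice above still stands.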
On Windows, a similar technique using the Windows System Resource Manager is described in:
Limit MongoDB memory use on Windows without Virtualization by Simon Green