mongodb

I found the normal ulimit settings recommended for MongoDB are as follows:

    -f (file size): unlimited
    -t (cpu time): unlimited
    -v (virtual memory): unlimited [1]
    -n (open files): 64000
    -m (memory size): unlimited [1]
    -u (processes/threads): 32000
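To compare a shell's current limits against those recommendations, each resource can be printed with the `ulimit` builtin (a minimal sketch, assuming a Linux bash shell; the flags mirror the list above):

```shell
# Print the current limit for each resource in the recommended list.
ulimit -f   # file size
ulimit -t   # cpu time
ulimit -v   # virtual memory
ulimit -n   # open files
ulimit -u   # processes/threads
```

Note that limits apply per process and are inherited at startup, so the mongod process's effective limits may differ from those of an interactive shell.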

Just wondering why the "-u" recommendation is so high.

http://docs.mongodb.org/manual/reference/ulimit/

How many processes does mongod generally spawn for each CRUD operation?

Best Answer

It's not a matter of the number of operations you will be doing, but rather the number of connections and (possibly, though it would be unusual for this to play a big part) the number of server-side JavaScript operations you plan to run (mainly Map Reduce).

To explain: there will be a thread (and a file descriptor) for each connection made to/from the mongod process (and similarly for mongos), so it is generally a good idea to have both values set beyond the hard-coded 20,000 connection limit in MongoDB. You can see this if you run htop, or a command like the one below, while you spin up new connections to the mongod or mongos process:

    ps uH p <PID_OF_MONGOD_PROCESS> | wc -l
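On Linux, the same per-process thread count can also be read from the /proc filesystem. A sketch, using the current shell's PID ($$) as a stand-in for the mongod PID:

```shell
# Each entry under /proc/<pid>/task is one thread of that process;
# counting them gives the thread count that the -u limit must cover.
ls /proc/$$/task | wc -l
```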

Most users will never get anywhere near these maximum levels, so this is merely a precaution on most systems to avoid problems with low ulimits. In a large cluster with many mongos processes you may see levels approaching this, but unless you are planning that level of deployment you will not have to worry.
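To make raised limits persist across logins on a typical Linux system using PAM, entries can go in /etc/security/limits.conf. A sketch, assuming the mongod process runs as a user named `mongod` (the user name and exact values are assumptions, not part of the original answer):

```
# /etc/security/limits.conf (assumed service user: mongod)
mongod  soft  nofile  64000
mongod  hard  nofile  64000
mongod  soft  nproc   32000
mongod  hard  nproc   32000
```

Systems booting services under systemd ignore limits.conf for those services, so a unit-level setting (e.g. LimitNOFILE) may be needed instead.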

For more information on the Map Reduce side of things, there is an excellent article that also covers thread usage.