Your mongostat output shows a higher number of updates than inserts. One thing that can cause high write-lock contention is updates that typically grow the document and force it to move within the data file. We ran into this ourselves, but we were working with MongoDB support at the time to figure it out, so I don't remember which metric or stat tells you this is the case. It would likely only be an issue if your documents were very large. We ended up splitting a sub-array that was constantly being appended to into its own collection, so that we were inserting new documents instead of modifying an existing one.
The usePowerOf2Sizes flag on the collection can also help alleviate this by giving the documents room for growth. This is apparently the default now on 2.6, but you would need to turn it on if you're not on 2.6 yet. Setting that is described here: http://docs.mongodb.org/manual/reference/command/collMod/
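If you are on a pre-2.6 deployment, enabling the flag looks roughly like the following (run in the mongo shell; the collection name `mycoll` is just a placeholder):

```javascript
// Enable power-of-2 record allocation for an existing collection so that
// documents get padding room to grow without moving on disk.
// "mycoll" is a placeholder -- substitute your collection name.
db.runCommand({ collMod: "mycoll", usePowerOf2Sizes: true })
```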
As Antonis said, the number of connections has little relation to the number of databases.
In general, the number of connections isn't something to worry about. The MongoDB drivers keep connections alive to reuse them and to avoid the overhead of setting up new ones.
However, each connection is allocated about 1MB of stack server side. Unnecessary connections might eat up precious RAM, which MongoDB uses for the indices and for holding as much of the working set as possible in order to speed things up.
If you have enough RAM on your server, you have nothing to worry about; just adjust your alert thresholds to more suitable numbers. If you have a lot of page faults, however, you should investigate a bit further.
Since you are using connection pooling, it is safe to assume that either you have more concurrent connections on your application than your MongoDB server (the hardware, that is) can handle, or you are opening unnecessary connections.
High number of concurrent connections
As a rule of thumb, your MongoDB server should be able to handle as many connections as you have concurrent requests. In order to give yourself a decent amount of time to scale out when you reach the server's limits, your alert should trigger at about 80% utilization. For example, let's assume your server can easily handle about 1500 connections; then your alert should go off at
1500 * 0.8 = 1200 connections
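As a quick sanity check, the same calculation in code (`alertThreshold` is a hypothetical helper; the 1500-connection capacity is just the example figure from above):

```javascript
// Hypothetical helper: derive an alert threshold from a measured connection
// capacity, using the 80% rule of thumb described above.
function alertThreshold(maxConnections, utilization = 0.8) {
  return Math.floor(maxConnections * utilization);
}

console.log(alertThreshold(1500)); // 1200
```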
If your server gets into problems with the 1000 connections you mentioned, or when you hit 80% utilization, you should first scale up, for example by putting more RAM into the machine or, more generally speaking, by eliminating whatever limitation prevents the server from handling this number of connections. The point to scale up to is not easy to determine, but generally speaking, you want to keep scaling up as long as you get more bang than the bucks you put in.
There is a point where the bang you get for each buck you put in decreases drastically; of course you want to stop scaling up a bit before that. Now what can you do in case your server still does not meet your requirements? The answer is to scale out, which in MongoWorld means setting up a sharded cluster. A word of warning: while creating a sharded cluster is not rocket science, there are quite a few caveats and pitfalls. Make sure you have read the documentation about sharding thoroughly before implementing a sharded cluster. A good consultant is usually worth the money, too.
That being said: Usually it is the application server which first reaches the limit of concurrent users it can handle, so have a close look there.
Multiple open connections per concurrent user
Usually, you request a connection from the pool by doing your work on the reused `db` object, and the connection is returned transparently to the pool after the work is done (simplified, but that should be sufficient in this context). The pool is handled transparently by the client. Each time you call `MongoClient.connect`, a new connection pool is created. You should only call this method once per application and reuse the result. Double-check that you follow the described pattern.
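The "connect once, reuse everywhere" pattern can be sketched as follows. Here `connectToDb` is a stand-in for the real `MongoClient.connect` call, so the caching logic can be shown without a running server:

```javascript
// Sketch of the connection-reuse pattern. connectToDb is a stand-in for
// MongoClient.connect (assumption: any async connect function behaves the
// same way for the purposes of this sketch).
let dbPromise = null;

function getDb(connectToDb) {
  // Only the first caller actually connects; every later caller receives
  // the same pending or resolved promise, and thus the same pool.
  if (dbPromise === null) {
    dbPromise = connectToDb();
  }
  return dbPromise;
}

// Usage with a stubbed connect function that counts how often it runs:
let connects = 0;
const stubConnect = async () => { connects += 1; return { name: "db" }; };

getDb(stubConnect);
getDb(stubConnect);
console.log(connects); // 1 -- the second call reuses the cached promise
```

The same idea applies to the real driver: keep the promise (or the `db` object) in a module-level variable and hand it out everywhere, instead of calling `MongoClient.connect` per request.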
Conclusion
- Make sure you reuse the `db` object returned by `MongoClient.connect`.
- Find out your number of concurrent users.
- Check whether the number of connections made to the replica set is much higher than the number of concurrent users. If not, everything is working as expected.
- If the numbers roughly match and you are experiencing problems on the database side (long response times, high latency), either scale up or scale out after you have made sure that it is not your application slowing the responses down.
RAM, mainly. Every connection gets a stack allocated, roughly 1MB in size. The more connections you have, the more RAM is needed for them and the less RAM is available for keeping indices or the working set of data in memory.
So with your 19314 connections, roughly 19GB of RAM is used for connections. That's roughly a third of your available RAM – which is too much, from my point of view. What is acceptable has very much to do with your use cases, performance needs and whatnot. Finding out an acceptable RAM utilization is out of scope of an answer and can take many hours of analysis and optimization.
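The back-of-the-envelope arithmetic behind that figure, using the ~1MB-per-connection stack size mentioned above:

```javascript
// Estimate the RAM consumed by connection stacks, assuming ~1 MB per
// connection as described in the answer.
const connections = 19314;
const stackPerConnectionMB = 1;
const ramUsedGB = (connections * stackPerConnectionMB) / 1024;
console.log(ramUsedGB.toFixed(1) + " GB"); // "18.9 GB", i.e. roughly 19 GB
```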