As Antonis said, the number of connections has little relation to the number of databases.
In general, the number of connections isn't something to worry about. The MongoDB drivers keep connections alive to reuse them and to avoid the overhead of setting up new ones.
However, each connection is allocated about 1 MB of stack on the server side. Unnecessary connections can therefore eat up precious RAM, which MongoDB would otherwise use for the indices and for keeping as much of the working set in memory as possible in order to speed things up.
In case you have enough RAM on your server, you have nothing to worry about – just adjust your alert thresholds to more suitable numbers. If you have a lot of page faults, however, you should investigate a bit further.
Since you are using connection pooling, it is safe to assume that you either have more concurrent connections on your application side than your MongoDB server (the hardware, that is) can handle, or you are opening unnecessary connections.
High number of concurrent connections
As a rule of thumb, your MongoDB server should be able to handle as many connections as you have concurrent requests. To give yourself a decent amount of time to scale out when you reach the server's limits, your alert should trigger at about 80% utilization. For example, if your server can handle about 1500 connections easily, your alert should go off at
1500 * 0.8 = 1200 connections
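To make the threshold rule concrete, here is a minimal sketch in plain JavaScript; the function name is made up, and the 1500-connection capacity is just the example figure from above:

```javascript
// Sketch: compute an alert threshold from a measured connection capacity.
// 0.8 is the rule-of-thumb 80% utilization from the text.
function connectionAlertThreshold(maxConnections, utilization = 0.8) {
  return Math.floor(maxConnections * utilization);
}

console.log(connectionAlertThreshold(1500)); // 1200
```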
If your server gets into trouble with the 1000 connections you mentioned, or when you hit 80% utilization, you should first scale up, for example by putting more RAM into the machine or, more generally speaking, by eliminating whatever limitation prevents the server from handling this number of connections. How far to scale up is not easy to determine, but generally speaking you want to keep scaling up as long as the extra bang is worth the bucks you put in.
There is a point where the bang you get for each buck you put in decreases drastically, and of course you want to stop scaling up a bit before that. Now what can you do in case your server still does not meet your requirements? The answer is to scale out, which in MongoWorld means setting up a sharded cluster. A word of warning: while creating a sharded cluster is not rocket science, there are quite a few caveats and pitfalls. Make sure you have read the documentation about sharding thoroughly before implementing a sharded cluster. A good consultant is usually worth the money, too.
That being said: Usually it is the application server which first reaches the limit of concurrent users it can handle, so have a close look there.
Multiple open connections per concurrent user
Usually you request a connection from the pool simply by doing your work on the reused db object, and the connection is transparently returned to the pool when the work is done (simplified, but sufficient in this context). The pool is handled transparently by the client. Each time you call MongoClient.connect, a new connection pool is created, so you should call this method only once per application and reuse the result. Double-check that you follow the described pattern.
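A minimal sketch of this pattern, assuming the Node.js MongoDB driver (the URL and database name are placeholders, and a running mongod is required):

```javascript
const { MongoClient } = require('mongodb');

let client; // one client, and therefore one pool, per application

async function getDb() {
  if (!client) {
    // Create the pool exactly once and cache the client.
    client = await MongoClient.connect('mongodb://localhost:27017');
  }
  return client.db('mydb'); // every caller reuses the same pool
}
```

The anti-pattern is calling MongoClient.connect inside a request handler: each call builds a fresh pool, and the connection count climbs with traffic.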
Conclusion
- Make sure you reuse the db object returned by MongoClient.connect.
- Find out your number of concurrent users.
- Check whether the number of connections made to the replica set is much higher than the number of concurrent users. If it is not, everything is working as expected.
- If the numbers roughly match and you are experiencing problems on the database side (long response times, high latency), either scale up or scale out, after you have made sure that it is not your application slowing the responses down.
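One quick way to compare those numbers is the connections section of serverStatus; in the mongo shell the check looks like this (the values returned are live figures from your server, not fixed):

```javascript
// fields include current, available, and totalCreated
db.serverStatus().connections
```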
In a normal situation all traffic (reads and writes) goes to the primary node, so it is the busiest node in the replica set. Secondaries just replicate changes (updates, inserts, deletes) and do not respond to client queries by default.
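If you deliberately want to offload reads to secondaries, you have to opt in via a read preference; a mongo shell sketch (use with care, since reads from secondaries can return stale data):

```javascript
// allow this connection's reads to be served by a secondary
db.getMongo().setReadPref('secondaryPreferred')
```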
But check your I/O. Run iostat -mx 1 and look at %iowait and %util. The iotop program shows how much you actually read from and write to disk. Do you know how many IOPS your disk system can serve? MongoDB is very IOPS centric; if mongod cannot get "enough" IOPS, it is going to be "slow". Secondaries in particular can start "lagging" if they cannot write to disk fast enough. You can see that from the primary with the rs.printSlaveReplicationInfo() command. Secondaries SHOULD stay less than 2 seconds behind.
Best Answer
As per the MongoDB blog post from Asya Kamsky, CPU load is almost never the bottleneck/limiting resource for MongoDB (or databases in general). Unless you are running a large number of MapReduce jobs and/or aggregation framework queries, high CPU utilization tends to indicate that you have poorly tuned queries, possibly with in-memory sorts (as opposed to indexes supporting the sort by reading documents in the correct order).
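As an illustration of a sort supported by an index (collection and field names here are made up): a compound index whose prefix matches the filter lets MongoDB return documents already in sorted order instead of sorting in memory:

```javascript
db.orders.createIndex({ status: 1, createdAt: -1 })
// the index prefix covers the filter, and the suffix supplies the sort order,
// so no in-memory sort is needed
db.orders.find({ status: "open" }).sort({ createdAt: -1 })
```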
Note: under a write-heavy load, rather than worrying about available disk drive space you should be concerned about available disk I/O bandwidth.
To view metrics related to alerts and monitoring, see the MongoDB documentation. The serverStatus command returns a document that provides an overview of the database's state. Monitoring applications can run this command at a regular interval to collect statistics about the instance. For example, the following operation suppresses the repl, metrics and locks information in the output.
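In the mongo shell, that operation looks like this:

```javascript
db.runCommand({
  serverStatus: 1,
  repl: 0,
  metrics: 0,
  locks: 0
})
```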
Note: The output fields vary depending on the version of MongoDB, underlying operating system platform, the storage engine, and the kind of node, including mongos, mongod or replica set member.
For the serverStatus output specific to the version of your MongoDB, refer to the appropriate version of the MongoDB Manual.
And the dbStats command returns storage statistics for a given database. It takes an optional scale argument, which defaults to 1 and allows you to specify how to scale byte values; for example, a scale value of 1024 will display the results in kilobytes rather than in bytes. For further reference, see the serverStatus and dbStats pages of the MongoDB Manual.
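In the mongo shell, the syntax and the scale example look like this (run against whichever database you are inspecting):

```javascript
// default: byte values (scale 1)
db.runCommand({ dbStats: 1 })

// scale byte values to kilobytes
db.runCommand({ dbStats: 1, scale: 1024 })
```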