MongoDB: co-locate the mongos process on application servers

best-practices, deployment, docker, mongodb, sharding

I would like to ask a question about a best practice described in this document:

http://info.mongodb.com/rs/mongodb/images/MongoDB-Performance-Best-Practices.pdf

Use multiple query routers. Use multiple mongos processes spread across multiple servers. A common deployment is to co-locate the mongos process on application servers, which allows for local communication between the application and the mongos process. The appropriate number of mongos processes will depend on the nature of the application and deployment.

Just a little bit of background about our deployment. We have a lot of application server nodes. Each of them runs one JVM-based process hosting stateless RESTful web services. As this best practice suggests, every application server node runs its own mongos process, so the number of JVM processes always equals the number of mongos processes.

All mongos processes connect to 3 config servers and several shards (each shard being a replica set). Even though we are using a sharded deployment, we are not really sharding our collections. In fact we have a large number of databases which are spread across all of the shards at creation time (and this is our main use case for sharding at the moment).
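
For illustration (database and shard names below are invented placeholders): each such unsharded database is pinned to a "primary shard" chosen when it is created, and that placement can be inspected, and changed, via the config database and the movePrimary command:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import org.bson.Document;

    public class DatabasePlacement {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                // Each unsharded database lives wholly on the "primary shard"
                // assigned when it was created; config.databases records this.
                client.getDatabase("config").getCollection("databases")
                      .find()
                      .forEach(doc -> System.out.println(doc.toJson()));

                // movePrimary relocates a database if the initial spread is
                // uneven (database and shard names here are placeholders).
                client.getDatabase("admin").runCommand(
                        new Document("movePrimary", "someDatabase")
                                .append("to", "shard0001"));
            }
        }
    }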

Since the best practice also suggests that "The appropriate number of mongos processes will depend on the nature of the application and deployment", I started to wonder whether our usage of mongos is actually appropriate, or whether it would be better for us to have several dedicated mongos nodes and let our app servers connect to them without running mongos locally.

What, in your opinion, is the best approach to deciding how many mongos instances are appropriate in relation to the number of application server instances or the size of the MongoDB cluster?

Recently we started to look into cluster management for our stateless web services, by which I mean tools like Docker, Apache Mesos, and Kubernetes. With Docker, it is generally discouraged to run more than one process within a container. Given that, it becomes really hard to ensure that the application server container and the mongos container are always co-located on the same physical node and present in equal numbers. This makes me wonder whether this best practice still applies for the cluster architecture I just described. If not, what would be a better way to locate and deploy mongos processes in this architecture?

Best Answer

Since there is already an answer submitted, and a useful and valid one at that, I do not want to distract from its usefulness, but there are indeed points to raise that go well beyond just a short comment. So consider this an "augmentation", which is hopefully valid but primarily in addition to what has already been said.

The truth is that you really need to consider "how your application uses the data", and also to be aware of the factors in a "sharded environment", as well as in your proposed "container environment", that affect this.

The Background Case

The general rationale behind the recommendation to co-locate the mongos process with the application instance is to obviate the network overhead otherwise required for the application to communicate with that mongos process. Of course it is also "recommended practice" to specify a number of mongos instances in the application connection string, so that if the "nearest" node is unavailable for some reason another can be selected, albeit with the possible overhead of contacting a remote node.
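
As a minimal sketch of what that looks like with the MongoDB Java driver (the remote hostnames are placeholders, not taken from the question): list the local mongos along with remote instances, and the driver will monitor all of them:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;

    public class MongosFailoverConnection {
        public static void main(String[] args) {
            // The driver monitors every listed mongos and routes to the
            // lowest-latency one (normally the co-located localhost router),
            // failing over to a remote mongos if the local one goes away.
            // Hostnames here are placeholders.
            try (MongoClient client = MongoClients.create(
                    "mongodb://localhost:27017,app-node-2:27017,app-node-3:27017/")) {
                System.out.println(client.listDatabaseNames().first());
            }
        }
    }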

The "docker" case you mentions seems somewhat arbitrary. While it is true that one of the primary goals of containers ( and before that, something like BSD jails or even chroot ) is generally to achieve some level of "process isolation", there is nothing really wrong with running multiple processes as long as you understand the implications.

In this particular case the mongos is meant to be "lightweight" and run as an "additional function" to the application process, in such a way that it is pretty much a "paired" part of the application itself. Docker images themselves don't ship an "initd"-like process, but there is nothing really wrong with running a process controller like supervisord (for example) as the main process for the container, which then gives you a point of process control over that container as well. This situation of "paired processes" is a reasonable case, and also a common enough ask that there is official documentation for it.
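
As a rough sketch of that arrangement (paths, hostnames, and the --configdb list are illustrative assumptions, not taken from the question): supervisord runs as the container's main process and keeps both the router and the application alive:

    ; supervisord.conf -- runs as the container's single entry point and
    ; supervises both "paired" processes.
    [supervisord]
    nodaemon=true

    [program:mongos]
    ; The router connects to the three config servers (hostnames are
    ; placeholders; on MongoDB 3.2+ the config servers form a replica set
    ; and the value becomes "replSetName/host1:27019,host2:27019,host3:27019").
    command=/usr/bin/mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019 --port 27017
    autorestart=true

    [program:app]
    ; The JVM application talks to mongos over loopback, avoiding any
    ; network hop between application and router.
    command=/usr/bin/java -jar /opt/app/service.jar
    autorestart=true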

If you choose that kind of "paired" operation for deployment, then it does indeed address the primary point of maintaining a mongos instance on the same network connection, and indeed "server instance", as the application server itself. It can also be viewed as a case where, if the "whole container" were to fail, that node in itself would simply be invalid. Not that I would recommend it, and in fact you probably should still configure connections to look for other mongos instances, even if these are only accessible over a network connection that increases latency.

Version Specific / Usage Specific

With that point made, the other consideration here comes back to the initial rationale of co-locating the mongos process with the application for network latency purposes. In versions of MongoDB prior to 2.6, and specifically with regard to operations such as the aggregation framework, there would be a lot more network traffic and subsequent post-processing work performed by the mongos process when dealing with data from different shards. That is not so much the case now, as a good deal of the processing workload can be performed on the shards themselves before "distilling" to the "router".
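
For example (database, collection, and field names are invented for illustration), on 2.6+ an aggregation such as the following has its $match and the bulk of its $group evaluated on each shard, with the mongos only merging the partial results:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    import java.util.Arrays;

    public class ShardSideAggregation {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> orders =
                        client.getDatabase("shop").getCollection("orders");

                // $match filters on each shard and $group produces partial
                // totals per shard; mongos combines them, so far less data
                // crosses the network than when the router did this work.
                orders.aggregate(Arrays.asList(
                        new Document("$match", new Document("status", "shipped")),
                        new Document("$group", new Document("_id", "$customerId")
                                .append("total", new Document("$sum", "$amount")))
                )).forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }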

The other case is your application's usage patterns with regard to the sharding. That means whether the primary workload is "distributing the writes" across multiple shards, or indeed a "scatter-gather" approach consolidating read requests. In those scenarios the mongos process does much of the routing and merging work, so its placement, and the resources available to it, matter considerably more.
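
To make the distinction concrete (the shard key and all names are assumptions for illustration): with a collection sharded on { region: 1, userId: 1 }, a filter containing the shard key is routed to one shard, while one without it is broadcast to all shards and merged by mongos:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    public class TargetedVsScatterGather {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> users =
                        client.getDatabase("app").getCollection("users");

                // Targeted read: the filter includes the shard key prefix,
                // so mongos routes the query to exactly one shard.
                users.find(Filters.and(
                        Filters.eq("region", "eu"),
                        Filters.eq("userId", 12345)))
                     .forEach(doc -> System.out.println(doc.toJson()));

                // Scatter-gather read: no shard key in the filter, so mongos
                // broadcasts to every shard and merges the results itself.
                users.find(Filters.eq("email", "someone@example.com"))
                     .forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }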

Test, Test and then Test Again

So the final point here is really self-explanatory, and comes down to the basic consensus of any sane response to your question. This is not a new thing for MongoDB or any other storage solution: your actual deployment environment needs to be tested on its "usage patterns", as close to actual reality as possible, just as much as any "unit testing" of expected functionality from core components or overall results.

There really is no "definitive" statement saying "configure this way" or "use in this way" that actually makes sense, apart from testing what "actually works best" for your application's expected performance and reliability.

Of course the "best case" will always be to not "crowd" the mongos instances with requests from "many" application server sources. But then to allow them some natural "parity" that can be distributed by the resource workloads available to having at "least" a "pool of resources" that can be selected, and indeed ideally in many cases but obviating the need to induce an additional "network transport overhead".

That is the goal, but ideally you can "lab test" the different perceived configurations in order to come to a "best fit" solution for your eventual deployment.

I would also strongly recommend the "free" (as in beer) courses available, as already mentioned, no matter what your level of knowledge. I find that various course materials often offer "hidden gems" that give more insight into things you may not have considered or otherwise overlooked. The M102 class, as mentioned, is constructed and conducted by Adam Comerford, who, I can attest, has a high level of knowledge on large-scale deployments of MongoDB and other data architectures. It is worth the time to at least consider a fresh perspective on what you may think you already know.