Left to its own devices, no, MongoDB will not move those unsharded databases to a different primary shard - the automatic balancing only applies to chunks from sharded collections.
MongoDB will round-robin through your shards as the databases are created, spreading them out across all the shards from that perspective. If you had one shard originally and then expanded to many, the databases may have been concentrated on that original shard - the round-robin placement only applies when you create the database, not the collections inside it.
Once the databases are created, and assuming you can predict what will be used and when, you can then move them to whatever shard you wish using the movePrimary command and distribute load accordingly:
http://www.mongodb.org/display/DOCS/movePrimary+Command
Naturally, this will be a quicker process if there is no data in the databases, but should still be possible after the fact.
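For example (the database and shard names below are placeholders, not taken from your cluster), the command is issued against the admin database through a mongos:

```js
// Sketch only: move the primary shard for the un-sharded database "mydb"
// to the shard named "shard0001". Run this via a mongos connection.
db.getSiblingDB("admin").runCommand({ movePrimary: "mydb", to: "shard0001" })
```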
Since there is already an answer submitted, and a useful and valid one at that, I do not want to distract from its usefulness, but there are indeed points to raise that go well beyond a short comment. So consider this "augmentation", which is hopefully valid but primarily in addition to what has already been said.
The real point is to consider "how your application uses the data", and to be aware of the factors in a "sharded environment", as well as in your proposed "container environment", that affect this.
The Background Case
The general recommendation for co-locating the mongos process with the application instance is to remove any network overhead required for the application to communicate with that mongos process. Of course it is also "recommended practice" to specify a number of mongos instances in the application connection string, so that if the "nearest" node is unavailable for some reason another can be selected, albeit with the possible overhead of contacting a remote node.
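As an illustration (the hostnames here are invented), a connection string that lists more than one mongos looks like this, and most drivers will fail over between the listed routers:

```
mongodb://mongos-a.example.net:27017,mongos-b.example.net:27017/mydb
```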
The "docker" case you mentions seems somewhat arbitrary. While it is true that one of the primary goals of containers ( and before that, something like BSD jails or even chroot ) is generally to achieve some level of "process isolation", there is nothing really wrong with running multiple processes as long as you understand the implications.
In this particular case the mongos is meant to be "lightweight" and run as an "additional function" to the application process, in a way that makes it pretty much a "paired" part of the application itself. Docker images themselves don't ship an init-like process, but there is nothing really wrong with running a process controller such as supervisord ( for example ) as the main process for the container, which then also gives you a point of process control over that container. This situation of "paired processes" is a reasonable case, and a common enough ask that there is official documentation for it.
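A minimal sketch of that "paired" arrangement, assuming supervisord is used and with all hostnames, paths and the application command invented for illustration, might look like:

```ini
; supervisord.conf (sketch only): run the application and a local mongos side by side.
[supervisord]
nodaemon=true

[program:mongos]
command=/usr/bin/mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019

[program:app]
command=/usr/bin/node /srv/app/server.js
```

The container's CMD then simply runs supervisord in the foreground, which becomes the single controlling process for both.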
If you choose that kind of "paired" operation for deployment, then it does indeed address the primary point of keeping a mongos instance on the same network connection, and indeed the same "server instance", as the application server itself. It can also be viewed as a case where, if the "whole container" were to fail, then that node as a whole would simply be invalid. Not that I would recommend relying on that alone; in fact you probably should still configure connections to look for other mongos instances, even if these are only accessible over a network connection that increases latency.
Version Specific / Usage Specific
Now that that point is made, the other consideration here comes back to the initial one of co-locating the mongos process with the application for network latency purposes. In versions of MongoDB prior to 2.6, and specifically with regard to operations such as the aggregation framework, there would be a lot more network traffic and subsequent post-processing work performed by the mongos process when dealing with data from different shards. That is not so much the case now, as a good deal of the processing workload can be performed on the shards themselves before being "distilled" down to the "router".
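For instance (the collection and field names are illustrative only), with a pipeline like the following on 2.6 and later, the $match filtering and the per-shard portion of the $group run on each shard, so far less data has to travel across the network for the final merge:

```js
// Sketch only: each shard filters and partially groups its own documents;
// the partial results are then merged (by mongos, or a designated shard).
db.orders.aggregate([
  { $match: { status: "shipped" } },
  { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
])
```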
The other case is your application's own usage patterns with regard to the sharding: whether the primary workload is "distributing the writes" across multiple shards, or rather a "scatter-gather" approach to consolidating read requests. In those scenarios the relative benefit of co-locating the mongos can differ, which is yet another reason to measure rather than assume.
Test, Test and then Test Again
So the final point here is really self-explanatory, and comes down to the basic consensus of any sane response to your question. This is not a new thing for MongoDB or any other storage solution: your actual deployment environment needs to be tested on its "usage patterns", as close to actual reality as possible, just as much as any "unit testing" of expected functionality from core components or overall results.
There really is no "definitive" statement to say "configure this way" or "use it in this way" that makes sense, apart from testing what "actually works best" for your application's expected performance and reliability.
Of course the "best case" will always be to not "crowd" the mongos
instances with requests from "many" application server sources. But then to allow them some natural "parity" that can be distributed by the resource workloads available to having at "least" a "pool of resources" that can be selected, and indeed ideally in many cases but obviating the need to induce an additional "network transport overhead".
That is the goal, but ideally you can "lab test" the different proposed configurations in order to come to a "best fit" solution for your eventual deployment.
I would also strongly recommend the "free" ( as in beer ) courses available, as already mentioned, no matter what your level of knowledge. I find that course material from various sources often offers "hidden gems" that give more insight into things you may not have considered or may otherwise have overlooked. The M102 class as mentioned is constructed and conducted by Adam Commerford, who I can attest has a high level of knowledge of large-scale deployments of MongoDB and other data architectures. It is worth the time to at least consider a fresh perspective on what you may think you already know.
Best Answer
Sharding requires a bunch of changes to the infrastructure, connections, etc. The product "Spider" makes most of that transparent.
Sharding may improve scaling and concurrency for simple queries, but for table scans and JOINs it is likely to make performance worse.

Partitioning is excellent for "deleting old data". See my blog. Otherwise, the DELETE competes with other queries, making the whole system suffer when you do the purging. (For 3 years of data, I would PARTITION BY RANGE(TO_DAYS(...)) and have 38 partitions.) DROP PARTITION for the old month is essentially instantaneous, regardless of size.
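As a rough illustration of that pattern (the table and column names are placeholders, not from your schema), the monthly layout and the purge might look something like this:

```sql
-- Sketch only: a log table partitioned by the day number of its timestamp,
-- with one partition per month plus a catch-all "future" partition.
CREATE TABLE log_events (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    created_at DATETIME NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id, created_at)        -- the partitioning column must be in every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2015_01 VALUES LESS THAN (TO_DAYS('2015-02-01')),
    PARTITION p2015_02 VALUES LESS THAN (TO_DAYS('2015-03-01')),
    -- ... one partition per month ...
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Purging the oldest month is a metadata operation, not a row-by-row DELETE.
ALTER TABLE log_events DROP PARTITION p2015_01;
```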
Loading data can be a bottleneck. Do you use LOAD DATA? That is probably the best. Second best would be batched INSERTs of 100-1000 rows per batch.
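A rough sketch of those two options (the file path, table and columns are placeholders):

```sql
-- Sketch only: bulk-load from a CSV file, usually the fastest path.
LOAD DATA INFILE '/tmp/events.csv'
INTO TABLE log_events
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(created_at, payload);

-- Second best: batched INSERTs, a few hundred to a thousand rows per statement.
INSERT INTO log_events (created_at, payload) VALUES
    ('2015-01-01 00:00:00', 'a'),
    ('2015-01-01 00:00:01', 'b'),
    ('2015-01-01 00:00:02', 'c');
```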
If you just need read scaling, then adding Slaves gives you unlimited read scaling. If you need write scaling, a Galera-based cluster might help some, but Sharding is the only real solution.
A few Summary tables, as you allude to, are an excellent way to get performance from a Data Warehouse setup. I blog on that, too.
"I can't afford to rewrite everything for now" -- Any shortcut will take just as long. And then you will still have to "rewrite everything" to take the next step.
Manually splitting data onto different devices is not as good as RAID striping of all the drives into one large drive.
A RAID controller with Battery Backed Write Cache makes writes appear to be instantaneous, without losing data.
Normalization can be used to keep the tables smaller, hence cacheable. Are you doing a reasonable amount of that, and using reasonably small ids (eg, SMALLINT UNSIGNED instead of BIGINT where appropriate)?

Slow queries -- There are too many variables to make a general statement. Let's see some specifics.
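To make the normalization point above concrete (the tables and columns here are invented purely for illustration), pulling a repeated string out into a small lookup table keyed by a 2-byte id can shrink the big table considerably:

```sql
-- Sketch only: replace a repeated VARCHAR with a SMALLINT UNSIGNED id.
CREATE TABLE countries (
    country_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(64) NOT NULL,
    UNIQUE KEY (name)
) ENGINE=InnoDB;

CREATE TABLE customers (
    customer_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    country_id SMALLINT UNSIGNED NOT NULL,   -- 2 bytes instead of an 8-byte BIGINT or a long string
    name VARCHAR(128) NOT NULL
) ENGINE=InnoDB;
```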
I would be happy to discuss any of these further. Since you have not said what you really need, I don't know what to focus on.