OK, flights.flight is sharded but basically only has one chunk so far, and hence all current data is on one shard and all writes can only go to that one shard - you will need more chunks before the collection can be balanced across multiple shards. Unless you pre-split (something like this) or use a hashed shard key (which you are not), the mongos process is responsible for automatically splitting the chunks.
It will attempt the first split after it has seen writes of approximately 20% of the max chunk size (default is 64MiB, so 20% is ~13MiB). It looks like this has not yet happened, or has failed for some reason in your case.
It should be noted that although the mongos will track all the writes, it will lose its "history" if restarted. It should also be noted that the config servers must be up and working for any splits to happen. Check the mongos logs for any evidence of split failures or similar.
If you want to kick start things manually, just pick two (or more) reasonable split points and use the sh.splitAt() helper to break up your current single chunk. You can then wait for the balancer to move things, or you can move them manually with the sh.moveChunk() helper. You can find examples of this in the answer to the question I linked above.
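For illustration, a manual split and move might look something like this from a mongos shell - the shard key field (flightId), the split values, and the destination shard name are placeholders for this sketch, so substitute whatever matches your actual shard key and cluster:

mongos> sh.splitAt("flights.flight", { flightId: 1000000 })
mongos> sh.splitAt("flights.flight", { flightId: 2000000 })
mongos> // instead of waiting for the balancer, move one of the resulting chunks by hand
mongos> sh.moveChunk("flights.flight", { flightId: 2000000 }, "shard0001")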
So even if you have 1M users, only 10 articles need to be kept? I have a strong feeling that there is a flaw somewhere in your data model.
Does sharding a collection with 10 documents (give or take) make sense?
If you only need 10 documents in a collection, you don't need to shard it, since it is not going to be balanced anyway unless the documents each exceed around 4.8MB in size. This figure is derived as follows (a quick numeric check comes right after the list):
- maxChunkSize divided by the number of your documents is the size per document at which your chunk gets split at the latest, equalling 64MB / 10 = 6.4MB per document
- In order to cross the migration threshold, which is only 2 for collections with fewer than 20 chunks, your single chunk needs to be split at least once to trigger a migration
- Since a chunk split may already be triggered when the chunk reaches half of its maximum size, resulting in a split when all documents are slightly over 3.2MB, we take the midpoint between half of the chunk's maximum size and its full maximum size (silently assuming that a chunk is guaranteed to be split at its maximum size at the latest), giving (3.2MB + 6.4MB) / 2 = 4.8MB per document
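Here is that arithmetic as plain shell JavaScript, using the default values mentioned above:

// default maximum chunk size and the assumed hard limit of 10 documents
var maxChunkSizeMB = 64;
var docCount = 10;
var latestSplit = maxChunkSizeMB / docCount;          // 6.4MB per document
var earliestSplit = (maxChunkSizeMB / 2) / docCount;  // 3.2MB per document
var threshold = (earliestSplit + latestSplit) / 2;    // 4.8MB per document
print(threshold);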
But sharding this collection would not make sense in the first place, as, assuming the 10 documents are a hard limit, the collection's maximum size can only be 160MB as per MongoDB's 16MB BSON document size limit.
Do I need to have my data balanced?
Let us find out whether it is a good idea to have a bad shard key and a disabled balancer. First, if you disable the cluster balancer, this affects all sharded collections. Let us take the user collection as an example:
- We have 1M users
- Their user ID is stored in _id, starts with 1, and each new user's _id is a simple increment
- We have two shards
- We disable the balancer.
Now, you shard the collection from the start. What happens internally is that two chunks are created. The first chunk, created on the first shard, holds the _ids from -∞ up to (but not including) 0. The second chunk, created on the second shard, holds the _ids from 0 to +∞. Now here comes the thing: since our _id increments from 1 for each user, not a single user is ever stored in the first chunk, and consequently none on the first shard. No disk space of the first chunk is utilized and – of more immediate importance – no RAM either. Since MongoDB tries to keep indices in RAM along with the recently used data (called the working set), amongst other things to speed operations up, sooner or later we will have the situation that the RAM on the first shard is rather empty, while the second shard starts to evict data or even indices from RAM. Bad idea, huh? Now things get worse: since we have disabled the balancer, the cluster can do nothing to mitigate the situation.
Now let's assume we were slightly smarter and have pre-split our chunks so that the _ids for our users collection are distributed in a way where those ranging from -∞ to 500,000 are stored on the first shard and the rest on the second. It is obvious that this is only a temporary solution, since once we exceed 1M users the whole problem starts again. And without the balancer running, the cluster still cannot mitigate this situation.
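Such a pre-split could be set up along these lines - the namespace mydb.users and the shard name are assumptions for the sketch, and with the balancer disabled the resulting chunk would still have to be placed by hand:

mongos> sh.shardCollection("mydb.users", { _id: 1 })
mongos> sh.splitAt("mydb.users", { _id: 500000 })
mongos> sh.moveChunk("mydb.users", { _id: 500000 }, "shard0001")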
Taking this a step further: We have found out that we can use a hash of our _id as our shard key. Yay! Problem solved! Except it isn't.
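(For reference, a hashed shard key has to be declared when sharding the collection; the namespace is again the assumed one from the sketch above:)

mongos> sh.shardCollection("mydb.users", { _id: "hashed" })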
In theory, the hash algorithm should cause our users to be evenly distributed among our shards. But there is a little thing called variance.
It can easily be demonstrated like this (heavily simplified for the sake of brevity): when tossing a coin 20 times, in theory you should get an equal number of heads and tails, since the probability of either of them is 50%, right? Try it. Now. The odds are against you ending up with an exactly equal amount of heads and tails.
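If you don't have a coin handy, a tiny simulation in plain JavaScript (runnable in the mongo shell) makes the same point:

// toss a coin 20 times and count the heads
var heads = 0;
for (var i = 0; i < 20; i++) {
    if (Math.random() < 0.5) { heads++; }
}
print("heads: " + heads + ", tails: " + (20 - heads));
// run it a few times: an exact 10/10 split shows up less than one time in five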
How does this translate to our problem? Well, chances are high that either the first or the second shard gets slightly more documents than the other over time. And sooner or later this will add up to the point where it becomes a problem – RAM and disk space are underutilized on one shard and overutilized on the other. Again. And again, the balancer cannot help us mitigate the problem.
So this should have made it crystal clear that you should have your data balanced and that you should never have the balancer disabled by default (there are some administrative tasks during which you should or must have the balancer disabled).
Conclusion
For some 10 documents, no matter how big they are, you don't even need to shard the collection.
Disabling the balancer might cause severe problems with your cluster. Unless you absolutely have to or you are absolutely positively sure that you can live with the consequences, do not disable it.
Please note that I left out some more complicated topics, like hard drive IO bandwidth bottlenecks, network bandwidth distribution and the like, for the sake of readability and – as funny as this might sound – brevity.
Best Answer
For your 7 collections on the primary shard:
Connect as administrator to the shard itself (to the replica set primary, if your shard is a replica set) and create your collection there by inserting 1 document or creating an index there. When you create a collection via the mongos - the shard client - the collection is started on a random shard; if you create it on the shard itself first, then you know it's on the one you want.
shard1$ mongo --port 27018 localhost/mydb --eval 'db.mycol.insert({firstdocument:"hello"})'
For the 25 collections:
Standard sharding indeed, as you mention. Be patient with the distribution: the first chunks will only start to be distributed after a certain amount of data, which can be 100k documents if the documents are small.
For the big collection:
Have a look at shard tags. You tag shard1 with, let's say, {status:'in_use'} and shard2 with {status:'freeBeer'}. Then, depending on the status in the document, the shard is chosen.
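A rough sketch of how this could look - the shard names, the namespace mydb.bigcollection, and a shard key beginning with status are all assumptions for the example:

mongos> sh.addShardTag("shard1", "in_use")
mongos> sh.addShardTag("shard2", "freeBeer")
mongos> // map each status range to its tagged shard; this assumes a shard key like { status: 1, _id: 1 }
mongos> sh.addTagRange("mydb.bigcollection", { status: "freeBeer", _id: MinKey }, { status: "freeBeer", _id: MaxKey }, "freeBeer")
mongos> sh.addTagRange("mydb.bigcollection", { status: "in_use", _id: MinKey }, { status: "in_use", _id: MaxKey }, "in_use")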
For the disk issue, I think you are going to have to write a script: get a warning when 60% of a disk is used, and another warning at 80% so you can take action. Then you'll need to upgrade your disks or redistribute the chunks. Certainly the 2TB collection could give you space via the tag, by sending more to shard2. Or add a shard3 ... which is actually the purpose of sharding: to grow horizontally.
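As one starting point for such a script, this snippet (run through the mongo shell against a mongos) prints the data size each shard holds for a single collection - the namespace is assumed, and for the actual disk-percentage warnings you would still combine it with OS-level monitoring:

// per-shard data size for a sharded collection, in GB
var stats = db.getSiblingDB("mydb").bigcollection.stats();
for (var shard in stats.shards) {
    print(shard + ": " + (stats.shards[shard].size / Math.pow(1024, 3)).toFixed(1) + " GB");
}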
Choose shard keys well, practice configuration and setup on a small server, practice moving chunks, adding tags, ...
Have a look at the AllChunkInfo script by Adam Comerford for your scripts and testing: https://github.com/comerford/mongodb-scripts