MongoDB – Must data be evenly distributed among Mongo shards

mongodb, sharding

I am going by this article: https://www.mongodb.com/blog/post/on-selecting-a-shard-key-for-mongodb. Assume my schema looks like this:

{
    time_posted: ...,
    userid: ...,
    content: ...
}

The author mentions that if I frequently query the latest 10 articles from a certain user, I should set a compound key of time_posted and user_id.
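
For illustration, creating such a compound shard key in the mongo shell would look roughly like this (the database and collection names are made up; the field names follow my schema above):

    sh.shardCollection("mydb.articles", { time_posted: 1, userid: 1 })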

But in my case, I only need to query the latest 10 articles across all users, so I think a single shard key on time_posted is enough. I observed that with this key the data are mostly inserted into one shard, which improves read locality, and that is exactly the effect I want.
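
In shell terms, what I have in mind is roughly this (again with made-up names):

    // shard on time_posted alone
    sh.shardCollection("mydb.articles", { time_posted: 1 })

    // the only query I need: the latest 10 articles across all users
    db.articles.find().sort({ time_posted: -1 }).limit(10)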

However, according to https://docs.mongodb.org/v3.0/core/sharding-balancing/, Mongo will try to balance the data evenly among all the shards.

Balancing is the process MongoDB uses to distribute data of a sharded
collection evenly across a sharded cluster. When a shard has too many
of a sharded collection’s chunks compared to other shards, MongoDB
automatically balances the chunks across the shards.

So my questions are: must the data be evenly distributed among Mongo shards? And can I turn off the balancer, since I don't need my data to be evenly distributed?

Best Answer

So even if you have 1M users, only 10 articles need to be kept? I have a strong feeling that your data model is flawed somewhere.

Does sharding a collection with 10 documents (give or take) make sense?

If you only need 10 documents in a collection, you don't need to shard it, since it is not going to be balanced anyway unless the documents exceed roughly 4.8 MB each. That figure comes about as follows (the arithmetic is spelled out after the list):

  1. maxChunkSize divided by the number of documents is the per-document size at which your chunk gets split at the latest; with the default maxChunkSize of 64 MB and 10 documents, that is 6.4 MB per document.
  2. To cross the migration threshold, which is only 2 for collections with fewer than 20 chunks, your chunk has to be split at least once before a migration is triggered.
  3. Since a chunk split may already be triggered when the chunk reaches half of its max size (which would mean a split once the documents are slightly over 3.2 MB each), we take the midpoint between half of the chunk's max size and its max size (silently assuming that a chunk is guaranteed to be split at its max size at the latest).
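
Put as plain arithmetic, assuming the default maxChunkSize of 64 MB and the 10 documents:

    64 MB / 10 documents      = 6.4 MB per document   (latest possible split)
    32 MB / 10 documents      = 3.2 MB per document   (earliest possible split, at half the max size)
    (3.2 MB + 6.4 MB) / 2     = 4.8 MB per document   (the midpoint used above)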

But sharding this collection would not make sense in the first place: assuming the 10 documents are a hard limit, the collection's maximum size can only be 160 MB, given MongoDB's 16 MB BSON document size limit.

Do I need to have my data balanced?

Let us find out whether it is a good idea to have a bad shard key and a disabled balancer. First, note that disabling the cluster balancer affects all sharded collections. Let us take the users collection as an example:

  1. We have 1M users
  2. Their user id is stored in _id, starting at 1, and each new user's _id is a simple increment
  3. We have two shards
  4. We disable the balancer.
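
In mongo shell terms, this setup would look roughly like this (the database name is made up):

    sh.enableSharding("mydb")                      // allow sharding for the database
    sh.shardCollection("mydb.users", { _id: 1 })   // range-shard on the monotonically increasing _id
    sh.stopBalancer()                              // switch off the balancer, as per point 4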

Now, you shard the collection from the start. What happens internally is that two chunks are created: in the first chunk, created on the first shard, the _ids from -∞ up to (but not including) 0 are stored; in the second chunk, created on the second shard, the _ids from 0 to +∞ are stored. And here comes the thing: since our _ids increment starting at 1, not a single user is ever stored in the first chunk, and consequently never on the first shard. No disk space of the first chunk is utilized and, of more immediate importance, no RAM either.

Since indices and the recently used data (the so-called working set) are kept in RAM as far as possible to speed operations up, sooner or later we will reach the situation that the RAM on the first shard sits rather empty, while the second shard starts to evict working set items or even indices from RAM. Bad idea, huh? Now things get worse: since we have disabled the balancer, the cluster can do nothing to mitigate the situation.

Now let's assume we were slightly smarter and had pre-split our chunks so that the _ids of our users collection are distributed in a way where the _ids ranging from -∞ to 500,000 are stored on the first shard and the rest on the second. It is obvious that this is only a temporary solution: once we exceed 1M users, the whole problem starts again. And without the balancer running, the cluster still cannot mitigate this situation.
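
With the hypothetical names from above, such a pre-split could be done like this (the shard names shard0000 and shard0001 are made up; use the names sh.status() reports for your cluster):

    sh.splitAt("mydb.users", { _id: 500000 })                  // split the chunk at _id 500,000
    sh.moveChunk("mydb.users", { _id: 1 }, "shard0000")        // the lower range stays on the first shard
    sh.moveChunk("mydb.users", { _id: 500000 }, "shard0001")   // the upper range goes to the second shard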

Taking this a step further: we have found out that we can use a hash of our _id as our shard key. Yay! Problem solved! Except it isn't. In theory, the hash algorithm should cause our users to be evenly distributed among our shards. But there is a little thing called variance.
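
Again with the made-up names from above, the hashed variant would simply be:

    // the command you would use instead of the range-based shard key from before
    sh.shardCollection("mydb.users", { _id: "hashed" })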

It can easily be demonstrated like this (heavily simplified for the sake of brevity): when tossing a coin 20 times, in theory you should get an equal number of heads and tails, since the probability of either outcome is 50%, right? Try it. Now. The chance of getting exactly 10 heads and 10 tails is in fact only about 17.6%.
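
If you don't have a coin at hand, a quick simulation in the mongo shell (or any JavaScript environment) does the job:

    // toss a fair coin 20 times and count the heads
    var heads = 0;
    for (var i = 0; i < 20; i++) {
        if (Math.random() < 0.5) {
            heads++;
        }
    }
    print("heads: " + heads + ", tails: " + (20 - heads));

Run it a few times: you will rarely land on an exact 10/10 split.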

How does this translate to our problem? Well, chances are high that either the first or the second shard gets slightly more documents than the other over time, and sooner or later this adds up to a point where it becomes a problem: RAM and disk space are underutilized on one shard and overutilized on the other. Again. And again, the balancer cannot help us mitigate the problem.

So this should have made it crystal clear that you should have your data balanced and that you should never have the balancer disabled by default (there are some administrative tasks during which you should or must have the balancer disabled).
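
For completeness: temporarily switching the balancer off for such an administrative task, and back on afterwards, looks roughly like this:

    sh.stopBalancer()      // e.g. before a backup or a maintenance window
    // ... perform the administrative task ...
    sh.startBalancer()     // re-enable the balancer as soon as the task is done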

Conclusion

For some 10 documents, no matter how big they are, you don't even need to shard the collection.

Disabling the balancer might cause severe problems with your cluster. Unless you absolutely have to or you are absolutely positively sure that you can live with the consequences, do not disable it.

Please note that I left out some more complicated topics, like hard drive IO bandwidth bottlenecks, network bandwidth distribution and the like, for the sake of readability and (as funny as this might sound) brevity.