I'm using Mongo (shell version 2.6.12) in a clustered configuration. Right now I have a few collections sharded, and was looking to shard another collection. This collection already has data in it. Once I run the command to shard the collection and give it its shard key, will the cluster take the existing data and spread it across the multiple systems in my cluster, or will the existing data stay put on one server while only new data is spread across the other systems?
MongoDB – sharding a collection with data already in it
mongodb sharding
Related Solutions
For your 7 collections on the primary shard:
Connect as administrator to the primary of the replica set (if your shards are replica sets) and create your collection there by inserting one document or creating an index. When you create a collection via the mongos (the shard client), the collection is started on a random shard; if you create it on the shard itself first, you know it is on the one you want.
shard1$ mongo --port 27018 localhost/mydb --eval 'db.mycol.insert({firstdocument: "hello"})'
For the 25 collections:
Standard sharding, as you mention. Be patient with the distribution: the first shards only start distributing after a certain amount of data has accumulated, which can be around 100k documents if the documents are small.
For the big collection:
Have a look at shard tags. You tag your collection's ranges so that shard1 holds, say, {status: 'in_use'} and shard2 holds {status: 'freeBeer'}. Then, depending on the status in the document, the shard is chosen.
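For illustration, a minimal sketch of what that tag setup could look like in the mongo shell, assuming a hypothetical collection mydb.bigcol whose shard key starts with the status field (shard names, tag values and the "g" boundary are only examples):

    sh.addShardTag("shard0000", "in_use");     // shard1 carries the "in_use" tag
    sh.addShardTag("shard0001", "freeBeer");   // shard2 carries the "freeBeer" tag
    // Tag ranges are defined over the shard key: everything below "g" (e.g. "freeBeer")
    // goes to shard2, everything from "g" upwards (e.g. "in_use") stays on shard1.
    sh.addTagRange("mydb.bigcol", { status: MinKey }, { status: "g" }, "freeBeer");
    sh.addTagRange("mydb.bigcol", { status: "g" }, { status: MaxKey }, "in_use");

The balancer then moves chunks around until every tagged range lives on a shard carrying the matching tag.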
For the disk issue, I think you are going to have to write a script: raise a warning when 60% of a disk is used, and another at 80% so you can take action. Then you will need to upgrade your disks or redistribute the chunks. The 2TB collection could certainly free up space via the tags by sending more data to shard2. Or add a shard3; that is, after all, the purpose of sharding: to grow horizontally.
Choose shard keys well, practice configuration and setup on a small server, and practice moving chunks, adding tags, and so on.
Have a look at Adam Comerford's AllChunkInfo script for your scripts and testing: https://github.com/comerford/mongodb-scripts
So even if you have 1M users, only 10 articles must be kept? I have a strong feeling that your data model is flawed.
Does sharding a collection with 10 documents (give or take) make sense?
If you only need 10 documents in a collection, you don't need to shard it, since it is not going to be balanced anyway unless the documents exceed roughly 4.8MB in size. This size computes like this:
- maxChunkSize divided by the number of your documents is the size per document at which your chunk gets split at the latest, equalling 6.4MB per document (the default 64MB chunk size divided by 10 documents)
- In order to cross the migration threshold, which is only 2 chunks for collections with fewer than 20 chunks, we need the chunk to be split at least once to trigger a migration
- Since the chunk split may already be triggered when the chunk reaches half of its max size, resulting in a split when all documents are slightly over 3.2MB, we take the midpoint between half of the chunk's max size and its max size (silently assuming that a chunk is guaranteed to be split at its max size); see the small calculation after this list
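As a back-of-the-envelope check of that 4.8MB figure, assuming the default 64MB maxChunkSize and exactly 10 documents (plain JavaScript, runnable in the mongo shell):

    var maxChunkSize = 64;                   // MB, default chunk size
    var docs = 10;
    var splitAtMax  = maxChunkSize / docs;   // 6.4 MB per document if the split happens at full chunk size
    var splitAtHalf = splitAtMax / 2;        // 3.2 MB per document if the split happens at half chunk size
    print((splitAtMax + splitAtHalf) / 2);   // 4.8 – the midpoint used above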
But sharding this collection would not make sense in the first place since, assuming the 10 documents are a hard limit, the collection's maximum size can only be 160MB as per MongoDB's 16MB BSON document size limit.
Do I need to have my data balanced?
Let us find out whether it is a good idea to have a bad shard key and a disabled balancer. First, if you disable the cluster balancer, this affects all sharded collections. Let us take the user collection as an example:
- We have 1M users
- Their user id is stored in _id, starts with 1, and each new user's _id is a simple increment
- We have two shards
- We disable the balancer.
Now, you shard the collection from the start. What happens internally is that two chunks are created. The first chunk, created on the first shard, stores the _ids from -∞ up to (but not including) 0. The second chunk, created on the second shard, stores the _ids from 0 to +∞. Now here comes the thing: since our _id increments from 1 for each user, not a single user is ever stored in the first chunk and consequently never on the first shard. No disk space of the first chunk is utilized and, more importantly, no RAM either. Since indices are kept (or at least MongoDB tries to keep them) in RAM along with the recently used data (the so-called working set), amongst other things to speed operations up, sooner or later we will have the situation that the RAM on the first shard is rather empty, while the second shard starts to evict working set items or even indices from RAM. Bad idea, huh? Now things get worse: since we have disabled the balancer, the cluster can do nothing to mitigate the situation.
Now let's assume we were slightly smarter and have pre-split our chunks so that the _ids for our users collection are distributed in a way where the _ids ranging from -∞ to 500,000 are stored on the first shard and the rest on the second (a sketch of such a pre-split follows below). It is obvious that this is only a temporary solution: once we exceed 1M users, the whole problem starts again. And without the balancer running, the cluster still cannot mitigate this situation.
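As a rough sketch, such a pre-split could be done like this from a mongos, assuming a hypothetical mydb.users collection and a second shard named shard0001 (with the balancer off, the resulting chunk has to be moved by hand):

    sh.shardCollection("mydb.users", { _id: 1 });               // range-based shard key on _id
    sh.splitAt("mydb.users", { _id: 500000 });                  // split the key space at 500,000
    sh.moveChunk("mydb.users", { _id: 500000 }, "shard0001");   // place the upper range on the second shard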
Taking this a step further: we have found out that we can use a hash sum of our _id as our shard key. Yay! Problem solved! Except it isn't.
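In the shell this is a one-liner; the namespace below is just the hypothetical users collection from this example:

    sh.shardCollection("mydb.users", { _id: "hashed" });   // hashed shard key on _id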
In theory, the hash algorithm should cause our users to be evenly distributed among our shards. But there is a little thing called variance.
It can be easily demonstrated like this (heavily simplified for brevity): when tossing a coin 20 times, in theory you should get an equal number of heads and tails, since the probability of either is 50%, right? Try it. Now. Chances are you will not end up with an exactly equal number of heads and tails.
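If you don't have a coin handy, a few lines of plain JavaScript (runnable in the mongo shell) make the same point:

    var heads = 0;
    for (var i = 0; i < 20; i++) {
        if (Math.random() < 0.5) { heads++; }                  // one simulated coin toss
    }
    print("heads: " + heads + ", tails: " + (20 - heads));     // rarely an exact 10/10 split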
How does this translate to our problem? Well, chances are high that either the first or the second shard gets slightly more documents than the other over time. And sooner or later this will add up to a point where it becomes a problem: RAM and disk space are underutilized on one shard and overutilized on the other. Again. And again, the balancer cannot help us mitigate the problem.
So this should have made it crystal clear that you should have your data balanced and that you should never have the balancer disabled by default (there are some administrative tasks during which you should or must have the balancer disabled).
Conclusion
For some 10 documents, no matter how big they are, you don't even need to shard the collection.
Disabling the balancer might cause severe problems with your cluster. Unless you absolutely have to or you are absolutely positively sure that you can live with the consequences, do not disable it.
Please note that I left out some more complicated topics, like hard drive IO bandwidth bottlenecks, network bandwidth distribution and the like, for the sake of readability and, as funny as this might sound, brevity.
Best Answer
When you shard an existing collection, MongoDB will split the values of the shard key into chunk ranges based on data size and start rebalancing across shards based on the chunk distribution. Rebalancing a large collection can be very resource intensive so you should consider the timing and impact on your production deployment.
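For reference, the basic steps look like this from a mongos in a 2.6-era shell; the database, collection and shard key below are placeholders for your own namespace and key:

    sh.enableSharding("mydb")                                       // allow sharding for the database
    db.getSiblingDB("mydb").mycoll.ensureIndex({ customerId: 1 })   // the shard key needs a supporting index
    sh.shardCollection("mydb.mycoll", { customerId: 1 })            // existing data is split into chunk ranges
    sh.status()                                                     // watch the balancer migrate chunks over time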
In MongoDB 2.6 (which reached end of life in October 2016) the balancer will only perform one chunk migration at a time. In MongoDB 3.4 or newer the balancer can perform parallel chunk migrations with the restriction that each shard can participate in at most one migration at a time. As of MongoDB 3.6, that means that a sharded cluster with n shards can perform at most n/2 (rounded down) simultaneous chunk migrations.
A faster approach to sharding a large existing collection would be to pre-split chunks in an empty collection based on the current data distribution for your shard key, and to then dump & restore your existing data into this new sharded collection. The pre-split approach minimizes rebalancing activities by creating appropriate chunk ranges so data is distributed on insert.
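A rough sketch of that pre-split flow, with placeholder names and split points you would derive from your actual data distribution:

    var boundaries = [250000, 500000, 750000];                     // example split points only
    sh.enableSharding("mydb");
    sh.shardCollection("mydb.mycoll_new", { customerId: 1 });      // new, empty sharded collection
    boundaries.forEach(function (boundary) {
        sh.splitAt("mydb.mycoll_new", { customerId: boundary });   // pre-create chunk ranges
    });
    // then mongodump the old collection and mongorestore it into the new sharded collection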
There have been significant performance improvements since MongoDB 2.6, so I would strongly recommend upgrading to a supported version of MongoDB (ideally MongoDB 3.4 or newer).