Building on what you have found out yourself, I think I can provide a detailed explanation.
How chunk migration works (simplified)
Here is the process, simplified for brevity:
- When a chunk exceeds the configured chunkSize (64 MB by default), the mongos that caused the size increase splits the chunk.
- The chunk is split on the shard the original chunk resided on. Since the config servers do not form a replica set, the mongos has to apply the metadata update to all three config servers.
- If the difference in the number of chunks between shards reaches the chunk migration threshold, the cluster balancer will initiate the migration process.
- The balancer will try to acquire the balancing lock. Note that because of this there can be only one chunk migration at any given point in time. (This will become important in the explanation of what happened.)
- In order to acquire the balancing lock, the balancer needs to update the config on all three config servers.
- If the balancing lock is acquired, the data will be copied over to the shard with the fewest chunks.
- After the data is successfully copied to the destination, the balancer updates the key-range-to-shard mapping on all three config servers.
- As a last step, the balancer notifies the source shard to delete the migrated chunk.
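The balancing decision described above can be sketched roughly as follows. This is a minimal illustration, not MongoDB's actual implementation: the threshold values are the ones documented for older MongoDB releases, and the function names are made up.

```python
# Illustrative sketch of the balancer's migration decision.
# Assumed thresholds (older MongoDB releases): fewer than 20 chunks -> 2,
# 20-79 chunks -> 4, 80 or more -> 8.

def migration_threshold(total_chunks):
    if total_chunks < 20:
        return 2
    if total_chunks < 80:
        return 4
    return 8

def pick_migration(chunk_counts):
    """Return a (source, destination) shard pair if the imbalance reaches
    the threshold, else None. Only one pair is returned because only one
    chunk migration can run at any given point in time."""
    total = sum(chunk_counts.values())
    source = max(chunk_counts, key=chunk_counts.get)
    dest = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[source] - chunk_counts[dest] >= migration_threshold(total):
        return source, dest
    return None

print(pick_migration({"shard0": 12, "shard1": 2}))  # imbalance 10 >= threshold 2
print(pick_migration({"shard0": 5, "shard1": 4}))   # imbalance 1 < threshold 2
```

Note how the balancer only ever moves one chunk at a time toward the least-loaded shard; a large imbalance therefore takes many rounds to even out.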
What happened in your case
Your data was increasing to a point where chunks had to be split. At that point, all three of your config servers were available and the metadata was updated accordingly. Chunk splitting is a relatively cheap operation compared to chunk migration and may happen often, so under load you usually see a lot more chunk splits than migrations. As said before, only one chunk migration can happen at any given point in time.
Due to whatever problems, one or more of your config servers became unreachable after the chunks were split, but before enough migrations had run to bring the imbalance back below the migration threshold (remember: only one chunk migration can run at a time). Bottom line: one or more of your config servers became unavailable before every chunk that needed to be migrated actually was.
Now the balancer wanted to migrate a chunk, but could not reach all config servers to acquire the global migration lock. Hence the error message.
How to deal with such a situation
It is very likely that your config servers were out of sync. In order to deal with this situation, please read Adam Comerford's answer to "mongodb config servers not in sync" and follow it to the letter.
How to prevent this
Plain and simple: use MMS. It is free, gives a lot of insight into health and performance, and with the automation agent, administering a MongoDB cluster becomes a lot easier.
I tend to suggest installing at least three monitoring agents, so you can have scheduled downtime on one while the others maintain redundancy.
MMS has alerting capabilities, so you will be notified if one of your config servers becomes unavailable, which is a serious situation.
The config servers only hold metadata about which chunk is stored on which cluster, no actual data.
There is no such thing as a "master server" in MongoDB, but I think what you are talking about is the shard router (mongos). If that is the case, then that server does not store any data at all.
So the total capacity of the cluster is the sum of the capacity of your shards, which is 40GB * 5 = 200GB.
Should the mongos router become a bottleneck, you can add more of them: a cluster can be accessed by any number of routers. By the way, it is not uncommon to run the mongos process(es) on the same machines as the application(s).
"Or replication across two sets of clusters?"
Replication in MongoDB happens on the shard level. Each shard can (and in a production setup should) be a replica set of two or more nodes.
Best Answer
If you log in directly to that shard (and not via mongos), you can drop that collection. After you have logged in to the shard, you can check with db.collection.find() that it contains no data you want to keep before you drop it.
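As a hedged sketch of that procedure using pymongo (the host, port, database name, and helper name below are placeholders of my own, not anything official):

```python
def preview_then_drop(db, collname, sample=5):
    """Print a few documents so you can confirm nothing worth keeping is
    left, then drop the collection. `db` must be connected DIRECTLY to
    the shard's mongod, not to a mongos."""
    coll = db[collname]
    for doc in coll.find().limit(sample):
        print(doc)
    coll.drop()

# Usage against the shard itself (hostname and port are placeholders):
# from pymongo import MongoClient
# db = MongoClient("shard0.example.net", 27018)["mydb"]
# preview_then_drop(db, "orphaned_collection")
```

The explicit preview step matters because drop() is immediate and unrecoverable without a backup.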