MongoDB stuck with jumbo chunks while draining a shard

migration, mongodb, sharding

I'm running a sharded MongoDB 2.4 cluster.
I'm currently draining one of the shards, and I'm down to the last 15 of 9,660 chunks that need to be migrated off it.

My issue is that these chunks are too big because the shard key wasn't the best choice, and dumping/restoring the two collections that have this problem is not an option at the moment.

I've tried to split the chunks manually, but it keeps failing with this error:
{ "cause" : { }, "ok" : 0, "errmsg" : "split failed" }
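For reference, the manual splits were attempted roughly like this (a sketch against a mongos; "ACME" is a placeholder shard key value, the namespace and key are taken from the chunk output below):

```javascript
// Sketch only -- requires a connection to a mongos of the sharded cluster.
// sh.splitFind() splits the chunk containing the matching document at its median point:
sh.splitFind("db.coll1", { company_name: "ACME" })

// sh.splitAt() splits the chunk at an explicit split point instead:
sh.splitAt("db.coll1", { company_name: "ACME" })
```

Note that if every document in a chunk shares the same `company_name` value, there is no valid split point within the chunk, and the split will fail no matter where you try to cut it. That is a common way a single-field, low-cardinality shard key produces unsplittable jumbo chunks.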

I also tried raising the chunk size, but that didn't work either.

Is there another solution I might be missing?

EDIT – Added chunk information as requested, based on AdamC's function.

mongos> AllChunkInfo("db.coll1", true)
ChunkID,Shard,ChunkSize,ObjectsInChunk
db.coll1-company_name_-6727988435900812827,shard0003,148256192,248752
db.coll1-company_name_-6615369920888926287,shard0003,88895188,149153
db.coll1-company_name_-6595735839523245520,shard0003,87448696,146726
db.coll1-company_name_-6490024596613592713,shard0003,32099368,53858
db.coll1-company_name_-3125507653387230887,shard0003,89171732,149617
db.coll1-company_name_-3084465672933415339,shard0003,105551600,177100
db.coll1-company_name_-2997585070019050064,shard0003,120763904,202624
db.coll1-company_name_-2993901072357262199,shard0003,86105908,144473
db.coll1-company_name_-2543788723715016046,shard0003,123507888,207228
db.coll1-company_name_4754688535895874026,shard0003,91061648,152788
db.coll1-company_name_5062850413490905708,shard0003,118145676,198231
db.coll1-company_name_5483223634690519314,shard0003,111475244,187039
Summary Chunk Information
Total Chunks: 12
Average Chunk Size (bytes): 100206920.33333333
Empty Chunks: 0
Average Chunk Size (non-empty): 100206920.33333333

mongos> AllChunkInfo("db.coll2", true)
ChunkID,Shard,ChunkSize,ObjectsInChunk
db.coll2-company_name_-5389355967913336416,shard0003,95282040,94808
Summary Chunk Information
Total Chunks: 1
Average Chunk Size (bytes): 95282040
Empty Chunks: 0
Average Chunk Size (non-empty): 95282040

mongos> AllChunkInfo("db.coll3", true)
ChunkID,Shard,ChunkSize,ObjectsInChunk
db.coll3-company_name_-3231146661862124155,shard0003,101878644,173263
db.coll3-company_name_658930972165413978,shard0003,142545900,242425
Summary Chunk Information
Total Chunks: 2
Average Chunk Size (bytes): 122212272
Empty Chunks: 0
Average Chunk Size (non-empty): 122212272
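As a quick sanity check on the summary numbers (an editorial addition, not part of the original post), the db.coll1 figures can be recomputed from the listed per-chunk sizes:

```javascript
// Chunk sizes in bytes for db.coll1, copied from the AllChunkInfo output above.
const coll1ChunkSizes = [
  148256192, 88895188, 87448696, 32099368, 89171732, 105551600,
  120763904, 86105908, 123507888, 91061648, 118145676, 111475244,
];

// Average chunk size, matching the reported 100206920.33333333 bytes.
const total = coll1ChunkSizes.reduce((sum, s) => sum + s, 0);
const avg = total / coll1ChunkSizes.length;

// Largest chunk: 148256192 bytes, the "~148 MB" chunk mentioned in the answer.
const max = Math.max(...coll1ChunkSizes);

console.log(avg, max);
```

The largest chunk is what matters here: every chunk must fit under the configured maximum chunk size before the balancer will move it.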

Thanks,
Meny

Best Answer

I was able to complete the drain by increasing the maximum chunk size and clearing the jumbo flag from the problematic chunks.
The results from AdamC's function showed that the biggest chunk on that shard was ~148 MB.
Knowing that, the first step was to connect to the config database and raise the max chunk size (the value is in MB):

use config
db.settings.save( { _id:"chunksize", value: 150 } )

The second step was to clear the jumbo flag from the relevant chunks. The procedure is in the MongoDB documentation: Clear jumbo flag
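On 2.4 that procedure amounts to manually unsetting the flag in the config database. Roughly (a sketch with a placeholder min key; the docs advise backing up the config database before editing chunk metadata by hand):

```javascript
// Run against a mongos. Stop the balancer first so nothing else
// modifies the chunk metadata while you edit it:
sh.stopBalancer()

use config

// List the chunks currently flagged as jumbo for the namespace:
db.chunks.find({ ns: "db.coll1", jumbo: true })

// Clear the flag on one chunk, matched by its min bound
// ("<min key>" is a placeholder for the chunk's actual min value):
db.chunks.update(
  { ns: "db.coll1", min: { company_name: "<min key>" } },
  { $unset: { jumbo: "" } }
)

sh.startBalancer()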

Once the chunks were no longer marked as jumbo, the balancer had no problem migrating them to another shard.
When all of this was done, I restored the chunk size back to 64 MB:

db.settings.save( { _id:"chunksize", value: 64 } )

Since I don't want all of my chunks to stay this big, I'll fix the shard key on the problematic collections when possible (e.g. by switching to a hashed shard key) to get more evenly divided chunks.

Thank you AdamC, your initial guidance set me on the right path to solving this problem.