I am running a MongoDB sharded cluster.
The chunk size is set to 300 MB, but since this morning the logs have been reporting a chunk size of 1024 bytes, and currentOp shows the same 1024-byte chunks. I have checked via mongos, and on all config servers the chunk size is 300 MB.
Please help me resolve this issue, as it is suddenly bringing my shard setup down.
Here is the relevant entry from currentOp:
{
    "opid" : "shard0000:-1945000000",
    "active" : true,
    "secs_running" : 0,
    "microsecs_running" : NumberLong(72072),
    "op" : "query",
    "ns" : "DB20150102.locationCount",
    "query" : {
        "splitVector" : "DB20150102.locationCount",
        "keyPattern" : {
            "articleId" : 1,
            "host" : 1
        },
        "min" : {
            "articleId" : { "$minKey" : 1 },
            "host" : { "$minKey" : 1 }
        },
        "max" : {
            "articleId" : { "$maxKey" : 1 },
            "host" : { "$maxKey" : 1 }
        },
        "maxChunkSizeBytes" : 1024,
        "maxSplitPoints" : 2,
        "maxChunkObjects" : 250000
    },
    "client_s" : "192.168.22.106:55881",
    "desc" : "conn237027",
    "threadId" : "0x7c6cc55db700",
    "connectionId" : 237027,
    "locks" : {
        "^ibeat20150102" : "R"
    },
    "waitingForLock" : true,
    "numYields" : 14,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(32978),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(48048),
            "w" : NumberLong(0)
        }
    }
}
Entry from the config database's settings collection:
{ "_id" : "chunksize", "value" : 250 }
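One detail worth keeping straight here: the chunksize value stored in config.settings is interpreted in megabytes, while the splitVector command's maxChunkSizeBytes field is in bytes. A minimal sketch of the expected conversion, using the values from the output above (the comparison is illustrative; this is not a MongoDB API call):

```python
def chunksize_mb_to_bytes(mb: int) -> int:
    """Convert a config.settings chunksize value (MB) to bytes."""
    return mb * 1024 * 1024

settings_value_mb = 250   # from { "_id" : "chunksize", "value" : 250 }
observed_bytes = 1024     # maxChunkSizeBytes reported by currentOp

expected_bytes = chunksize_mb_to_bytes(settings_value_mb)
print(expected_bytes)     # 262144000
# 1024 bytes is nowhere near 250 MB, so this is not a simple
# unit-conversion mistake: something is overriding the configured value.
print(observed_bytes == expected_bytes)
```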
Error on the primary shard:
2015-01-02T13:04:48.386+0530 [conn237049] warning: chunk is larger
than 1024 bytes because of key { articleId: "", host: "abc.com" }
2015-01-02T13:04:48.386+0530 [conn237049] warning: chunk is larger
than 1024 bytes because of key { articleId: "0", host: "xyz.com" }
I'm seeing this in my mongos log for the same collection:
2015-01-02T14:53:58.983+0530 [conn58] warning: splitChunk failed -
cmd: { splitChunk: "DB20150102.locationCount", keyPattern: {
articleId: 1, host: 1 }, min: { articleId: MinKey, host: MinKey },
max: { articleId: MaxKey, host: MaxKey }, from: "shard0000",
splitKeys: [ { articleId: "", host: "abc.com" } ], shardId:
"ibeat20150102.locationCount-articleId_MinKeyhost_MinKey", configdb:
"192.168.24.192:27017,192.168.24.54:27017,192.168.24.55:27017" }
result: { who: { _id: "DB20150102.locationCount", state: 1, who:
"ibeatdb61:27017:1420185037:1475849446:conn913:869542099", ts:
ObjectId('54a660afdc99ecfb22d83c27'), process:
"ibeatdb61:27017:1420185037:1475849446", when: new
Date(1420189871037), why: "split-{ articleId: MinKey, host: MinKey }"
}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
I have taken these steps:
- Stopped reads and writes on the servers.
- Stopped the mongos servers.
- Restarted the config servers.
- Restarted the mongos servers.
- Resumed reads and writes.
The same issue still appears.
My primary shard is a replica set with the following configuration:
- Primary server: 512 GB RAM, 5 TB of disk.
- Secondary server: 16 GB RAM, 5 TB of disk.
- Arbiter: 8 GB RAM.
My secondary shard has the same configuration in its replica set.
Best Answer
How did you change the chunk size? Did you follow this guide?
To recap, one should run the following against the config database via mongos:
db.settings.save( { _id: "chunksize", value: <sizeInMB> } )
AFAIK, a misconfigured chunk size won't bring down a shard; it will only result in an unbalanced cluster.
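The value above is in megabytes, and MongoDB only accepts chunk sizes between 1 MB and 1024 MB. A small pre-check one might run before saving a new value (a hypothetical helper for illustration, not part of any MongoDB driver):

```python
def valid_chunksize_mb(value) -> bool:
    """Check a proposed chunksize against MongoDB's accepted 1-1024 MB range."""
    return isinstance(value, int) and 1 <= value <= 1024

print(valid_chunksize_mb(300))    # True: the size the poster intended
print(valid_chunksize_mb(0))      # False: below the 1 MB minimum
print(valid_chunksize_mb(2048))   # False: above the 1024 MB maximum
```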