I have the following structure:
Two Ubuntu 14.04.2 machines, both with MongoDB v3.0.4 installed, both on the same LAN. I want to make them work as a cluster.
On the first one, which I call "MC1", I run the three config servers, a mongos, and a mongod. On the second one, which I call "MC2", I run a mongos and a mongod.
MC1 is the shard1.
MC2 is the shard0.
I have a DB with 2.031GB size.
I have the following problem:
When I connect via Robomongo to the MC1 mongos I can see all the data in every collection. If I connect to the MC2 mongos I see the DB and the collections, but the collections are empty.
Testing the balancer status I get the following results:
sh.getBalancerState()
true
db.getSiblingDB("config").collections.findOne({_id : "cluster.username"}).noBalance;
false
db.databases.find()
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard0001" }
{ "_id" : "db", "partitioned" : false, "primary" : "shard0001" }
{ "_id" : "cluster", "partitioned" : true, "primary" : "shard0000" }
From this I know that the balancer is active and that balancing is enabled for the database and its collections.
I also ran sh.status(). Here you can see that all collections have data only on shard0000; no chunks have been migrated to shard0001. And I can only view the data when connecting to the MC1 mongos.
sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("55a778641066cdf8d093fe97")
}
shards:
{ "_id" : "shard0000", "host" : "172.31.37.215:27018" } #This one is MC2#
{ "_id" : "shard0001", "host" : "172.31.35.191:27018" }#This one is MC1#
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard0001" }
{ "_id" : "db", "partitioned" : false, "primary" : "shard0001" }
{ "_id" : "cluster", "partitioned" : true, "primary" : "shard0000" }
cluster.fs.chunks
shard key: { "files_id" : 1, "n" : 1 }
chunks:
shard0000 1
{ "files_id" : { "$minKey" : 1 }, "n" : { "$minKey" : 1 } } -->> { "files_id" : { "$maxKey" : 1 }, "n" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
cluster.fs.files
shard key: { "filename" : 1, "_id" : 1 }
chunks:
shard0000 1
{ "filename" : { "$minKey" : 1 }, "_id" : { "$minKey" : 1 } } -->> { "filename" : { "$maxKey" : 1 }, "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
cluster.licenses
shard key: { "license" : 1 }
chunks:
shard0000 1
{ "license" : { "$minKey" : 1 } } -->> { "license" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
cluster.movusers
shard key: { "username" : 1 }
chunks:
shard0000 1
{ "username" : { "$minKey" : 1 } } -->> { "username" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
cluster.roles
shard key: { "roleId" : 1 }
chunks:
shard0000 1
{ "roleId" : { "$minKey" : 1 } } -->> { "roleId" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
cluster.sessions
shard key: { "sessionId" : 1, "_id" : 1 }
chunks:
shard0000 1
{ "sessionId" : { "$minKey" : 1 }, "_id" : { "$minKey" : 1 } } -->> { "sessionId" : { "$maxKey" : 1 }, "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
cluster.webusers
shard key: { "username" : 1, "_id" : 1 }
chunks:
shard0000 1
{ "username" : { "$minKey" : 1 }, "_id" : { "$minKey" : 1 } } -->> { "username" : { "$maxKey" : 1 }, "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
I also checked the shard distribution:
db.fs.chunks.getShardDistribution()
Shard shard0000 at 172.31.37.215:27018
data : 0B docs : 0 chunks : 1
estimated data per chunk : 0B
estimated docs per chunk : 0
Totals
data : 0B docs : 0 chunks : 1
Shard shard0000 contains NaN% data, NaN% docs in cluster, avg obj size on shard : NaNGiB
How can I fix this?
Best Answer
Reproducing the situation:
Create a test-shard:
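The original commands were not included in this copy; a minimal sketch of what the config-server setup could look like on MongoDB 3.0 (the ports, dbpaths, and log paths are my own choices, not from the answer):

```shell
# Start three mirrored config servers (the SCCC layout used by MongoDB 3.0)
mkdir -p /data/cfg0 /data/cfg1 /data/cfg2
mongod --configsvr --port 20000 --dbpath /data/cfg0 --fork --logpath /data/cfg0.log
mongod --configsvr --port 20001 --dbpath /data/cfg1 --fork --logpath /data/cfg1.log
mongod --configsvr --port 20002 --dbpath /data/cfg2 --fork --logpath /data/cfg2.log
```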
In another terminal, start shard0000 on port 30000, shard0001 on port 30001, and a mongos on port 30999:
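A sketch of that step, assuming the config servers run on localhost ports 20000-20002 (dbpaths are my own choices):

```shell
# Start the two shard mongods and a mongos, then register both shards
mkdir -p /data/sh0 /data/sh1
mongod --port 30000 --dbpath /data/sh0 --fork --logpath /data/sh0.log
mongod --port 30001 --dbpath /data/sh1 --fork --logpath /data/sh1.log
mongos --port 30999 --configdb localhost:20000,localhost:20001,localhost:20002 \
       --fork --logpath /data/mongos.log

# addShard order determines which host becomes shard0000 / shard0001
mongo --port 30999 --eval 'sh.addShard("localhost:30000"); sh.addShard("localhost:30001")'
```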
Create mc1 on shard0000 and mc2 on shard0001 with some data:
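Something along these lines reproduces it; note that the inserts go directly into each shard's mongod, bypassing mongos, which is exactly what causes the problem (collection and field names are illustrative):

```shell
# Insert directly into the mongod of each shard, NOT through mongos
mongo --port 30000 --eval \
  'for (var i = 0; i < 100; i++) db.getSiblingDB("cluster").mc1.insert({username: "user" + i})'
mongo --port 30001 --eval \
  'for (var i = 0; i < 100; i++) db.getSiblingDB("cluster").mc2.insert({username: "user" + i})'
```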
Connect to mongos, and shard cluster db, mc1 collection and mc2 collection:
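Presumably something like (shard key chosen for illustration):

```shell
mongo --port 30999 --eval '
  sh.enableSharding("cluster");
  sh.shardCollection("cluster.mc1", {username: 1});
  sh.shardCollection("cluster.mc2", {username: 1});
'
```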
Check amount of documents in the shard-distribution:
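For example:

```shell
# getShardDistribution() prints per-shard document and chunk counts
mongo --port 30999 --eval '
  db.getSiblingDB("cluster").mc1.getShardDistribution();
  db.getSiblingDB("cluster").mc2.getShardDistribution();
'
```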
Problem: no documents in mc1. See sh.status() => the primary is pointing to shard0001, so only the collections of the cluster database that were on shard0001 got sharded.
Solution 1 (dump the stranded collection and restore it through mongos):
(!!!! make sure nobody can add more records !!!!)
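The dump-and-restore could look roughly like this, assuming the stranded data sits on the shard listening on port 30000 (paths are my own):

```shell
# Dump directly from the shard that physically holds the data
mongodump --port 30000 --db cluster --collection mc1 --out /tmp/mc1dump
# Remove the stranded copy from that shard
mongo --port 30000 --eval 'db.getSiblingDB("cluster").mc1.drop()'
# Restore through mongos so the documents are routed into the cluster
mongorestore --port 30999 --db cluster --collection mc1 /tmp/mc1dump/cluster/mc1.bson
```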
Solution 2 (without having to dump):
(Dropping a collection or a database will remove it from the cluster configuration automatically)
Drop the 'empty' mc1 collection from the cluster (first verify that it really is empty):
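For instance:

```shell
# Through mongos: confirm the cluster-side mc1 is empty, then drop it,
# which also removes its sharding metadata from the config database
mongo --port 30999 --eval '
  var c = db.getSiblingDB("cluster").mc1;
  print("count seen via mongos: " + c.count());
  if (c.count() === 0) c.drop();
'
```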
Move primary to shard0000:
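Using the movePrimary admin command:

```shell
mongo --port 30999 --eval 'db.adminCommand({movePrimary: "cluster", to: "shard0000"})'
```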
Checking the status:
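That is:

```shell
mongo --port 30999 --eval 'sh.status()'
```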
Now shard the mc1 collection:
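Again with an illustrative shard key:

```shell
mongo --port 30999 --eval 'sh.shardCollection("cluster.mc1", {username: 1})'
```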
Joy joy joy, that added it!!!
So, in your case (connect to mongos):
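The concrete commands were not preserved here. Based on the diagnosis above, the likely fix is: your data physically lives on shard0001 (MC1's mongod) while the config metadata puts the primary of "cluster" and all chunks on shard0000. A hedged sketch, which you must verify against direct counts on each mongod before dropping anything (collection names taken from your sh.status() output):

```shell
# 1. Through mongos: confirm each sharded collection really is empty
mongo --eval '
  var c = db.getSiblingDB("cluster");
  c.getCollectionNames().forEach(function (name) {
    print(name + ": " + c[name].count());
  });
'
# 2. Drop the empty collections through mongos, e.g.:
mongo --eval 'db.getSiblingDB("cluster").movusers.drop()'
# 3. Move the primary of "cluster" to the shard that holds the data
mongo --eval 'db.adminCommand({movePrimary: "cluster", to: "shard0001"})'
# 4. Re-shard each collection, e.g.:
mongo --eval 'sh.shardCollection("cluster.movusers", {username: 1})'
```

After that, the balancer can start migrating chunks between the two shards as the collections grow.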