MongoDB with Sharding has slow performance

mongodb, mongodb-3.0, sharding

I have the following setup for my MongoDB Sharded Cluster (everything is running on the same machine on localhost, just different port numbers):

  • Config servers: 1 replica set consisting of 3 mongod instances
  • Router: 1 mongos instance
  • Shard: 1 replica set consisting of 3 mongod instances

I have logged into the Router and executed the addShard command:

  • sh.addShard("shard_rs0/localhost:27022,localhost:27023,localhost:27024");

I have not added any shard keys or enabled sharding via the "sh.enableSharding" command. I also don't have any additional shards, as I want to test the single shard on its own and see whether the unsharded collections maintain the expected write/read speeds.
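For completeness, here is roughly what enabling sharding would look like if I later decide to do it from the CSharpDriver instead of the shell. I have not run this; the database name "test", the collection "people", the hashed _id key and the router port 27017 are all placeholders:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// Connect to the mongos router; port 27017 is an assumption.
var client = new MongoClient("mongodb://localhost:27017");
var admin = client.GetDatabase("admin");

// Enable sharding for a hypothetical "test" database.
admin.RunCommand<BsonDocument>(new BsonDocument("enableSharding", "test"));

// Shard a hypothetical "test.people" collection on a hashed _id key.
admin.RunCommand<BsonDocument>(new BsonDocument
{
    { "shardCollection", "test.people" },
    { "key", new BsonDocument("_id", "hashed") }
});
```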

I have a .NET Web API that connects to the mongos (i.e. Router) via the CSharpDriver.
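In both test scenarios the only thing that changes is the connection string, roughly as below (the Router port 27017 is an assumption on my part; the replica set members are on ports 27022-27024 as listed above):

```csharp
using MongoDB.Driver;

// Direct to the shard's replica set (the fast case in my tests).
var directClient = new MongoClient(
    "mongodb://localhost:27022,localhost:27023,localhost:27024/?replicaSet=shard_rs0");

// Via the mongos router (the slow case; port 27017 is an assumption).
var routedClient = new MongoClient("mongodb://localhost:27017");
```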

The issue I am having is that when connecting to the replica set directly, I get decent read/write speeds, but when I connect via the Router, I see a significant decrease in speed. Is this because I haven't supplied a shard key yet, or perhaps because I only have one shard? I would expect the unsharded collections in the primary shard to be unaffected by the cluster setup, but it seems there are indeed overhead costs that slow down the queries.

Test results when adding 100,000 records (using the InsertOne method on the CSharpDriver for each record; I know I could have done a batch insert, but I decided to stick with InsertOne, and a sketch of the loop I used follows the results below). I ran this 7 times, each time adding 100,000 records to an unsharded collection, first directly against the shard itself and then via the Router. When writing via the Router, I saw the following degradation in write speed compared to writing to the shard directly:

- Writing is 30% slower
- Writing is 39% slower
- Writing is 38% slower
- Writing is 42% slower
- Writing is 36% slower
- Writing is 40% slower
- Writing is 36% slower
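
A minimal sketch of the timing loop I used for each run (the database/collection names and document shape are illustrative, not my actual schema):

```csharp
using System;
using System.Diagnostics;
using MongoDB.Bson;
using MongoDB.Driver;

// Swap this connection string between the direct replica-set run and
// the Router run; port 27017 for the mongos is an assumption.
var client = new MongoClient("mongodb://localhost:27017");
var collection = client.GetDatabase("test")
                       .GetCollection<BsonDocument>("people");

var stopwatch = Stopwatch.StartNew();
for (var i = 0; i < 100000; i++)
{
    // One round trip per document, deliberately not batched.
    collection.InsertOne(new BsonDocument { { "n", i } });
}
stopwatch.Stop();

Console.WriteLine("Inserted 100000 documents in " + stopwatch.Elapsed);
```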

Best Answer

First of all, I strongly doubt that your config servers form a replica set. In MongoDB 3.0, config servers are by default three independent mongod instances, which is important for detecting metadata deviations (config servers deployed as a replica set only became available in 3.2). If you did set them up as a replica set manually, that effectively amounts to a single config server. And to be honest, I am not even sure whether a mongos would actually fail over to a secondary in that situation.

Now for the performance difference. All your mongods (at least your shard) run on localhost, and you only have a single shard. Sharding as a load-distribution tool only works when the load is distributed across multiple machines. In your setup, your various mongods (6, if I am not mistaken) will battle for resources. Depending on your write concern, for example, the two secondaries may compete for disk I/O while syncing data from the primary. If WiredTiger is your storage engine, the situation gets even more severe, as WiredTiger alone will eat up to 50% of your RAM per instance...
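If everything has to stay on one machine, at least cap the WiredTiger cache per mongod so that six instances do not each try to claim half the RAM. A sketch of the relevant startup option (the port, data path and the 1 GB value are just placeholders):

```
mongod --port 27022 --replSet shard_rs0 --dbpath /data/rs0-0 \
       --storageEngine wiredTiger --wiredTigerCacheSizeGB 1
```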

I am not sure what you are trying to achieve, but if this is meant to test a sharded setup, your test plan leaves a lot of room for improvement.