MongoDB Sharded Cluster on AWS reliability

aws, mongodb, replication, sharding

I have set up a MongoDB sharded cluster on AWS with 3 shards (each a replica set), 3 mongos routers and 3 config servers.
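For reference, this is roughly how I confirm the topology from a mongos (a minimal sketch using the pymongo driver; the hostname is a placeholder, not the real one):

    # Sketch: list the shards registered in the cluster (assumes the pymongo
    # driver and a placeholder mongos hostname).
    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos-host:27017")

    # listShards must be sent to a mongos; each entry includes the shard's
    # replica-set connection string.
    for shard in client.admin.command("listShards")["shards"]:
        print(shard["_id"], shard["host"])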

All 3 EC2 instances are t2.medium (4 GB RAM, 2 vCPUs, 50 GB EBS with 150 provisioned IOPS (io1, not gp2)).

Our database is only 1.65 GB, but on disk, with replication, it takes around 30 GB per instance (including journals … etc.), and each shard holds around 550 MB of the data.

It only functions properly when all the instances are online!

Do you think this is because of some misconfiguration, or because these instances are small and not EBS-optimised?

It was much better with a standalone server, as it was easier to maintain when something went wrong!

As the shard key we used _id: hashed.
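For context, the collection was sharded roughly like this (a minimal pymongo sketch; the database and collection names and the hostname are placeholders, not the real ones):

    # Sketch: enable sharding and shard a collection on a hashed _id key
    # (assumes pymongo; "mydb.mycoll" and the hostname are placeholders).
    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos-host:27017")

    # Both commands must go through a mongos.
    client.admin.command("enableSharding", "mydb")
    client.admin.command("shardCollection", "mydb.mycoll",
                         key={"_id": "hashed"})

A hashed _id key spreads writes evenly across the shards, but it also means range queries on _id have to be broadcast to every shard.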

Is it because the mongos and config servers are hosted on the same instances as the mongod processes?

Is it because, when a primary steps down, the other members have to catch up with the new primary, and the AWS network, memory and I/O limits make that catch-up slow?
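To see whether replication lag is part of the problem, I check how far the secondaries are behind the primary on each shard (a sketch assuming pymongo; the hostnames and replica-set name are placeholders):

    # Sketch: estimate replication lag from replSetGetStatus (assumes pymongo;
    # the hostnames and replica-set name are placeholders).
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://shard1-a:27018,shard1-b:27018,shard1-c:27018/"
        "?replicaSet=shard1")

    status = client.admin.command("replSetGetStatus")
    primary_optime = next(
        (m["optimeDate"] for m in status["members"]
         if m["stateStr"] == "PRIMARY"), None)

    for m in status["members"]:
        if m["stateStr"] == "SECONDARY" and primary_optime is not None:
            lag = (primary_optime - m["optimeDate"]).total_seconds()
            print(f"{m['name']} is {lag:.0f}s behind the primary")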

Best Answer

The issue was actually the use of T2 instances, which are burstable and get throttled to their CPU baseline once their CPU credits are exhausted; we upgraded to M4 instances and it is running well.