MongoDB – Measuring MongoDB I/O and CPU performance

mongodb, performance

I am new to MongoDB and I'm exploring this NoSQL database through a series of very simple queries, using a data set exported from Oracle.

Aside from measuring execution time with primitives like db.system.profile.find(), I would like to measure I/O performance and system calls. Is there a simple, relevant way to do this? What would be the best approach?

Also, I have noticed that the collections are much bigger than the corresponding Oracle tables. For example, an Oracle table (average row length x number of rows) translates to a collection roughly 5x bigger in MongoDB. Why? Are there any reasons for this?

Thanks for your insights.

Cheers

Best Answer

There are several tools that measure system performance, like New Relic, MMS, Cacti, Ganglia...
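Before reaching for an external monitoring tool, you can also get a quick picture from the mongo shell itself. A minimal sketch, using only the standard db.serverStatus() and db.stats() helpers (the exact fields returned vary by MongoDB version and storage engine):

    // Server-level counters since startup
    var s = db.serverStatus();
    printjson(s.opcounters);      // inserts, queries, updates, deletes, getmores, commands
    printjson(s.mem);             // resident / virtual memory usage
    if (s.extra_info) {
        printjson(s.extra_info);  // includes page_faults on Linux
    }

    // Storage statistics for the current database
    printjson(db.stats());

For OS-level I/O and system calls, the shell won't help you; that is where the monitoring tools above (or standard OS utilities) come in.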

Regarding the size, you would need to share a sample record from Oracle and a sample document from MongoDB.

The obvious reasons are:

  • long field names: A collection in MongoDB with N documents stores the field names N times. If the field names are long, you are wasting space (a rough comparison is sketched after this list).

  • powerOfTwo allocation strategy (MMAP engine only): This is now the default record allocation strategy for MMAPv1. With the power-of-2 sizes allocation strategy, each record has a size in bytes that is a power of 2 (e.g. 32, 64, 128, 256, 512 ... 2MB). If your document size is 33 bytes, MongoDB will allocate 64, and almost half of that record's storage is wasted.

  • Compression (MMAP engine only): Oracle can compress its tables, while the MMAP engine does not compress data.
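To see the field-name overhead for yourself, a rough sketch like the following can be run in the mongo shell (the collection names longNames and shortNames and the field names are just placeholders for illustration):

    // Store the same values under long and short keys
    for (var i = 0; i < 10000; i++) {
        db.longNames.insert({ customer_first_name: "John", customer_last_name: "Doe" });
        db.shortNames.insert({ fn: "John", ln: "Doe" });
    }

    // "size" is the uncompressed data size in bytes;
    // the long-name collection should be noticeably larger
    print(db.longNames.stats().size);
    print(db.shortNames.stats().size);

Comparing stats().size with stats().storageSize on the same collections also gives an idea of how much extra space comes from record allocation rather than from the documents themselves.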
