MongoDB – Backing up a Mongo database of about 120 GB

mongodb

We have a mongo database of about 120 GB. About 3 days ago I started mongodump with nohup, redirecting the logs to /dev/null; the dump file is ~40 GB so far and the dump is still running. Is this expected?

If so, what is the approximate compression ratio for a mongo database? I.e., for a 120 GB database, how large will the backup file be?

This would help me estimate the time remaining for the dump to finish. I have no idea why it is taking so much time. I would also like to know whether there is a faster/better way of backing up the mongo database (remote copy is not something I'm considering).

We are running this on the live system, but are currently not using this database. So, effectively, mongodump is the only client that mongod is serving.
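For reference, the dump was started along these lines (a sketch of the setup described above, not the exact command; the output path is a placeholder and host/auth options are omitted):

# placeholder output path; host/auth flags omitted
nohup mongodump --out /data/backup/dump > /dev/null 2>&1 &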

Best Answer

Answers originally left in comments:

Compressed size will depend on how compressible the data patterns inside your documents and collections are. Mongo isn't going to back up the indexes, so subtract whatever indexes you have from that 120 GB and that's what it will back up, along with any profiling data you have saved in the DB itself. There's a lot to consider when backing up Mongo. The v3 and 2.8 backup methods are similar. – Ali Razeghi
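A rough way to estimate the final dump size is to compare data size against index size with db.stats() in the mongo shell, since the dump is essentially the BSON data without the indexes (a minimal sketch; mydb is a placeholder for your database name):

# "mydb" is a placeholder; sizes are scaled to GB via db.stats(scale)
mongo mydb --quiet --eval '
    var s = db.stats(1024*1024*1024);
    print("dataSize (GB):  " + s.dataSize);
    print("indexSize (GB): " + s.indexSize);
'

The uncompressed dump is typically closer to dataSize than to the on-disk storage size, which is one reason the dump can end up smaller than the 120 GB footprint.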

If you're not using the database, you could just stop the mongo service and make a copy of the data directory. If you upgrade on the same server: stop the mongo service, read the upgrade info on the Mongo website, and start your new mongo; it will use your existing data directory.
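A minimal sketch of that file-system copy, assuming the default Ubuntu package layout (service name mongod, data directory /var/lib/mongodb; check the dbPath in your /etc/mongod.conf before relying on these paths):

# stop mongod so the data files are in a consistent state
sudo service mongod stop
# copy the data directory; the destination path is a placeholder
sudo cp -a /var/lib/mongodb /backup/mongodb-$(date +%F)
sudo service mongod start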

If you do the same on another server, it's like an upgrade. In your script, save the mongo version and install the same version on the new server:

sudo apt-get install -y \
    mongodb-org=3.0.8 mongodb-org-server=3.0.8 mongodb-org-shell=3.0.8 \
    mongodb-org-mongos=3.0.8 mongodb-org-tools=3.0.8

...and run an upgrade afterwards. Always check upgrade info on the mongo website! – aldwinaldwin
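One way to capture the running server's version so the install above can be pinned to match (a sketch; assumes mongod is on the PATH and that its banner starts with "db version vX.Y.Z", the format older releases print):

# extract e.g. "3.0.8" from mongod's version banner
MONGO_VERSION=$(mongod --version | sed -n 's/^db version v//p')
echo "mongod version: $MONGO_VERSION"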