MongoDB incremental oplog dump not working

Tags: backup, command-line, dump, mongodb, timeout

I am unable to take an incremental oplog dump. I get no error messages, the connection is established fine, and an operation is recorded (it shows up in db.currentOp()), but nothing happens, and I am not sure why.

I have executed the same command on a smaller database, where everything works fine with no problems at all. But the same command on a larger database (around 10-11 billion records, and growing every day) does not work.

The command is shown below:

mongodump --host $MONGODB_HOST:$MONGODB_PORT --authenticationDatabase admin -u $MONGODB_USERNAME -p $MONGODB_PASSWORD -d local -c oplog.rs -o backup/oplogDump/$currentTime --query '{"ts":{$gt: Timestamp( 1452157469, 37)}}'

After executing this command, the entire secondary Mongo machine gets stuck; I literally need to restart the machine to get mongod running again.

Another change I made recently is increasing nssize to 1 GB, as per the developers' requirement. I am not sure when this issue started or what is causing it. Any help will be really appreciated.

Best Answer

You are essentially attempting to dump the entire oplog, and hence paging all of it in, not just the most recent data. That will cause a lot of I/O, especially when your oplog is too large to fit into memory. On a smaller database this is not a problem because the oplog is relatively small.

The reason it is dumping the full collection is, I believe, that you are missing some zeroes at the end of your timestamp:

> new Date(1452157469)
ISODate("1970-01-17T19:22:37.469Z")

I assume you wanted this:

> new Date(1452157469000)
ISODate("2016-01-07T09:04:29Z")

How quickly the dump runs will depend on a number of factors, but mostly on how much data has been inserted into the oplog recently and how much of it is still in memory when you start the dump.