MongoDB - could not find member to sync from


The node's logs show it stuck in RECOVERING mode, even after a restart:

    2018-04-24T12:37:52.915+0530 I REPL [replication-0] We are too stale to use J-DB-02:27017 as a sync source. Blacklisting this sync source because our last fetched timestamp: 5ade01e0:66 is before their earliest timestamp: 5aded094:198 for 1min until: 2018-04-24T12:38:52.915+0530
    2018-04-24T12:37:52.916+0530 I REPL [replication-0] could not find member to sync from
    2018-04-24T12:37:52.916+0530 E REPL [rsBackgroundSync] too stale to catch up -- entering maintenance mode
    2018-04-24T12:37:52.916+0530 I REPL [rsBackgroundSync] Our newest OpTime : { ts: Timestamp 1524498912000|102, t: 68 }
    2018-04-24T12:37:52.916+0530 I REPL [rsBackgroundSync] Earliest OpTime available is { ts: Timestamp 1524551828000|408, t: 68 }
    2018-04-24T12:37:52.916+0530 I REPL [rsBackgroundSync] See http://dochub.mongodb.org/core/resyncingaverystalereplicasetmember
    2018-04-24T12:37:52.916+0530 I REPL [rsBackgroundSync] going into maintenance mode with 0 other maintenance mode tasks in progress
    2018-04-24T12:37:52.916+0530 I REPL [rsBackgroundSync] transition to RECOVERING
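
The hex pairs in that first log line are oplog timestamps (seconds since the epoch in hex, plus an increment). Decoding them in the shell shows how far behind the member really is; a quick sketch using the two values from the log above:

    WAPRS01:PRIMARY> new Date(parseInt("5ade01e0", 16) * 1000)   // our last fetched op
    ISODate("2018-04-23T15:55:12Z")
    WAPRS01:PRIMARY> new Date(parseInt("5aded094", 16) * 1000)   // sync source's earliest remaining oplog entry
    ISODate("2018-04-24T06:37:08Z")

The member's newest replicated operation (2018-04-23 15:55:12 UTC, which matches J-DB-01's optimeDate below) is almost 15 hours older than the oldest entry still present in the sync source's oplog, so the gap can no longer be replayed from the oplog.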

Here is the status of the replica set from the primary server:

    WAPRS01:PRIMARY> db.oplog.rs.stats().maxSize
    WAPRS01:PRIMARY> rs.status();
    {
            "set" : "WAPRS01",
            "date" : ISODate("2018-04-24T07:17:15.276Z"),
            "myState" : 1,
            "term" : NumberLong(68),
            "heartbeatIntervalMillis" : NumberLong(2000),
            "optimes" : {
                    "lastCommittedOpTime" : {
                            "ts" : Timestamp(1524498912, 102),
                            "t" : NumberLong(68)
                    },
                    "appliedOpTime" : {
                            "ts" : Timestamp(1524554235, 160),
                            "t" : NumberLong(68)
                    },
                    "durableOpTime" : {
                            "ts" : Timestamp(1524554235, 92),
                            "t" : NumberLong(68)
                    }
            },
            "members" : [
                    {
                            "_id" : 0,
                            "name" : "J-DB-01:27017",
                            "health" : 1,
                            "state" : 3,
                            "stateStr" : "RECOVERING",
                            "uptime" : 569,
                            "optime" : {
                                    "ts" : Timestamp(1524498912, 102),
                                    "t" : NumberLong(68)
                            },
                            "optimeDurable" : {
                                    "ts" : Timestamp(1524498912, 102),
                                    "t" : NumberLong(68)
                            },
                            "optimeDate" : ISODate("2018-04-23T15:55:12Z"),
                            "optimeDurableDate" : ISODate("2018-04-23T15:55:12Z"),
                            "lastHeartbeat" : ISODate("2018-04-24T07:17:13.532Z"),
                            "lastHeartbeatRecv" : ISODate("2018-04-24T07:17:13.493Z"),
                            "pingMs" : NumberLong(0),
                            "configVersion" : 4
                    },
                    {
                            "_id" : 1,
                            "name" : "J-DB-02:27017",
                            "health" : 1,
                            "state" : 2,
                            "stateStr" : "SECONDARY",
                            "uptime" : 299382,
                            "optime" : {
                                    "ts" : Timestamp(1524554232, 270),
                                    "t" : NumberLong(68)
                            },
                            "optimeDurable" : {
                                    "ts" : Timestamp(1524554232, 270),
                                    "t" : NumberLong(68)
                            },
                            "optimeDate" : ISODate("2018-04-24T07:17:12Z"),
                            "optimeDurableDate" : ISODate("2018-04-24T07:17:12Z"),
                            "lastHeartbeat" : ISODate("2018-04-24T07:17:13.454Z"),
                            "lastHeartbeatRecv" : ISODate("2018-04-24T07:17:13.864Z"),
                            "pingMs" : NumberLong(0),
                            "syncingTo" : "J-DB-03:27017",
                            "configVersion" : 4
                    },
                    {
                            "_id" : 2,
                            "name" : "J-DB-03:27017",
                            "health" : 1,
                            "state" : 1,
                            "stateStr" : "PRIMARY",
                            "uptime" : 613420,
                            "optime" : {
                                    "ts" : Timestamp(1524554235, 160),
                                    "t" : NumberLong(68)
                            },
                            "optimeDate" : ISODate("2018-04-24T07:17:15Z"),
                            "electionTime" : Timestamp(1524254862, 1),
                            "electionDate" : ISODate("2018-04-20T20:07:42Z"),
                            "configVersion" : 4,
                            "self" : true
                    },
                    {
                            "_id" : 3,
                            "name" : "J-DB-04:27017",
                            "health" : 1,
                            "state" : 7,
                            "stateStr" : "ARBITER",
                            "uptime" : 300361,
                            "lastHeartbeat" : ISODate("2018-04-24T07:17:14.863Z"),
                            "lastHeartbeatRecv" : ISODate("2018-04-24T07:17:14.229Z"),
                            "pingMs" : NumberLong(0),
                            "configVersion" : 4
                    }
            ],
            "ok" : 1
    }
    WAPRS01:PRIMARY>
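
The `db.oplog.rs.stats().maxSize` call above printed nothing, which typically happens when it is not run against the `local` database, where the oplog lives. To see the configured oplog size and how much time it actually covers, you can run the following on the primary; a minimal sketch using standard shell helpers (the values they print depend on your deployment):

    WAPRS01:PRIMARY> use local
    WAPRS01:PRIMARY> db.oplog.rs.stats().maxSize      // configured oplog size in bytes
    WAPRS01:PRIMARY> rs.printReplicationInfo()        // oplog size plus first/last oplog event time, i.e. the replication window
    WAPRS01:PRIMARY> rs.printSlaveReplicationInfo()   // per-secondary lag behind the primary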

db.isMaster():

    WAPRS01:PRIMARY> db.isMaster();
    {
            "hosts" : [
                    "J-DB-01:27017",
                    "J-DB-02:27017",
                    "J-DB-03:27017"
            ],
            "arbiters" : [
                    "J-DB-04:27017"
            ],
            "setName" : "WAPRS01",
            "setVersion" : 4,
            "ismaster" : true,
            "secondary" : false,
            "primary" : "J-DB-03:27017",
            "me" : "J-DB-03:27017",
            "electionId" : ObjectId("7fffffff0000000000000044"),
            "lastWrite" : {
                    "opTime" : {
                            "ts" : Timestamp(1524559624, 465),
                            "t" : NumberLong(68)
                    },
                    "lastWriteDate" : ISODate("2018-04-24T08:47:04Z")
            },
            "maxBsonObjectSize" : 16777216,
            "maxMessageSizeBytes" : 48000000,
            "maxWriteBatchSize" : 1000,
            "localTime" : ISODate("2018-04-24T08:47:04.511Z"),
            "maxWireVersion" : 5,
            "minWireVersion" : 0,
            "readOnly" : false,
            "ok" : 1
    }

Best Answer

As I can see from your rs.status(), one secondary (J-DB-01:27017) is stuck in the RECOVERING state. And, as described in the MongoDB JIRA, once a node goes into RECOVERING because it is too stale to sync from its source, it will never recover on its own, even if a valid sync source later becomes available.
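
You can confirm that directly on the stale member itself; a small check (connected to J-DB-01:27017, whose shell prompt reflects the RECOVERING state):

    WAPRS01:RECOVERING> rs.status().myState   // 3 corresponds to the RECOVERING state
    3

Restarting the process, as you already tried, will not change this; the only way out is the full resync described next.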

As per the MongoDB documentation, a replica set member becomes "stale" when its replication process falls so far behind that the primary overwrites oplog entries the member has not yet replicated. The member can then no longer catch up. When this occurs, you must completely resynchronize the member by removing its data and performing an initial sync.
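
In practice that means: stop the stale member, empty its data directory, and start it again so that it performs an automatic initial sync from one of the healthy members. A minimal sketch of the procedure, assuming the default dbPath /var/lib/mongodb and a systemd-managed mongod service (adjust both to your installation):

    // 1. On the stale member (J-DB-01), shut mongod down cleanly:
    use admin
    db.shutdownServer()

    // 2. Outside the mongo shell, with the process stopped, move the old data
    //    aside rather than deleting it outright (paths and service name are assumptions):
    //      sudo mv /var/lib/mongodb /var/lib/mongodb.stale
    //      sudo mkdir /var/lib/mongodb && sudo chown mongodb:mongodb /var/lib/mongodb
    //      sudo systemctl start mongod
    //    An empty dbPath makes the member run a full initial sync on startup.

    // 3. From the primary, watch J-DB-01 progress through STARTUP2 back to SECONDARY:
    rs.status().members[0].stateStr
    rs.printSlaveReplicationInfo()

During the initial sync the member shows up as STARTUP2 in rs.status(); once it has copied the data and replayed the oplog it returns to SECONDARY. If the oplog window on the primary is too short to cover the initial sync, the member can go stale again, so it is worth checking the window (see rs.printReplicationInfo() above) before starting.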

For further reference, see the page linked from the log output above: http://dochub.mongodb.org/core/resyncingaverystalereplicasetmember (the MongoDB manual page on resyncing a member of a replica set).