This is basically the rolling upgrade procedure from the docs, with the only difference being that you will temporarily be going up to a 4 member replica set based on what you have described. Hence it's basically a viable strategy if you have concerns about 3.0.
A couple of things I will note:
First, the priority zero step shows good caution but is not really gaining you anything. There is no realistic way for that node to become primary before it catches up: it is not considered up to date, and if the other two data-bearing members fail there will not be enough votes to elect it primary with only 2 of 4 members available. There's no harm in doing it; it just doesn't add much in a 4-member set.
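For reference, the priority-zero step would look something like this in the mongo shell (the hostname, port, and member index here are hypothetical, not taken from your setup):

```javascript
// Add the new 3.0 member with priority 0 so it cannot be elected
// primary while it performs its initial sync (host is hypothetical):
rs.add({ host: "new-node.example.net:27017", priority: 0 })

// Once it has caught up, raise its priority via reconfig.
// members[3] assumes it was added as the 4th member:
cfg = rs.conf()
cfg.members[3].priority = 1
rs.reconfig(cfg)
```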
The other thing I will note is that adding a new set member is not actually needed. You can stop your current secondary, update its binaries to 3.0, and restart it, and it is good. Do the same for the arbiter, then step down the primary and repeat. No resync is necessary because 3.0 is a straight drop-in replacement for 2.6.
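As a sketch, the binary-swap approach per member looks like this (ports and paths are hypothetical; adapt to your deployment):

```javascript
// 1) Connect to the secondary and shut it down cleanly:
db.getSiblingDB("admin").shutdownServer()

// 2) Replace the 2.6 binaries with 3.0 on that host, then restart
//    mongod with the SAME --dbpath and --replSet options; the member
//    rejoins the set and catches up from the oplog (no resync).

// 3) Repeat for the arbiter. Finally, on the primary:
rs.stepDown()
//    ...wait for a new primary to be elected, then repeat steps 1-2 there.
```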
I checked the same on Windows Server with MongoDB 3.4.6. Please find the scripts below.
1) Run the shard servers
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --shardsvr --replSet rs0 --dbpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod1" --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod1\log.int1" --port 27000 --logappend
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --shardsvr --replSet rs0 --dbpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod2" --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod2\log.int2" --port 27001 --logappend
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --shardsvr --replSet rs0 --dbpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod3" --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod3\log.int3" --port 27002 --logappend
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --shardsvr --replSet rs0 --dbpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod4" --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongod4\log.int4" --port 27003 --logappend
2) Run the config servers
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --configsvr --replSet rsconfig --dbpath "C:\Mongodb_Databases\MongoDB_Shard\WT_Shardcfg0" --port 26050 --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_Shardcfg0\log.cfg00" --logappend
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --configsvr --replSet rsconfig --dbpath "C:\Mongodb_Databases\MongoDB_Shard\WT_Shardcfg1" --port 26051 --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_Shardcfg1\log.cfg01" --logappend
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --configsvr --replSet rsconfig --dbpath "C:\Mongodb_Databases\MongoDB_Shard\WT_Shardcfg2" --port 26052 --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_Shardcfg2\log.cfg02" --logappend
3) Start the mongos query router
"C:\Program Files\MongoDB\Server\3.4\bin\mongos.exe" --configdb rsconfig/BSS-BC4-Benchmark2:26050,BSS-BC4-Benchmark2:26051,BSS-BC4-Benchmark2:26052 --logappend --logpath "C:\Mongodb_Databases\MongoDB_Shard\WT_mongos\log.mongos0" --bind_ip 127.0.0.1,10.10.180.39
4) Connect to any shard instance in the replica set and initiate the replica set using rs.initiate()
rs.initiate(
  {
    _id: "rs0",
    members: [
      { _id: 0, host: "Mongo-Test-PC:27000" },
      { _id: 1, host: "Mongo-Test-PC:27001" },
      { _id: 2, host: "Mongo-Test-PC:27002" },
      { _id: 3, host: "Mongo-Test-PC:27003" }
    ]
  }
);
5) Connect to any config server and initiate the config server replica set using rs.initiate(). The replica set name must be different from the shard replica set.
rs.initiate();
rs.status();
cfg = rs.conf();
cfg.members[0].priority = 3;
rs.reconfig(cfg);
rs.add({ host:"Mongo-Test-PC:26051", priority: 2});
rs.add({ host:"Mongo-Test-PC:26052", priority: 1});
rs.status();
6) Connect to mongos, set the read preference, and add the shard.
db.getMongo().getReadPref();
db.getMongo().setReadPref('secondary');
db.getMongo().setReadPref('primaryPreferred');
db.getMongo().getReadPref();
sh.addShard("rs0/Mongo-Test-PC:27000");
sh.status();
Create the shardtest database (MongoDB creates it implicitly on first use) and enable sharding on it:
sh.enableSharding('shardtest');
use shardtest
db.createCollection("shardcollection");
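Note that enabling sharding on the database does not shard the collection itself; to actually distribute shardcollection across shards you also need to shard it on a key. The hashed _id key below is only an assumption for illustration; choose a shard key that suits your query patterns:

```javascript
// Shard the collection (shard key choice here is an assumption):
sh.shardCollection("shardtest.shardcollection", { _id: "hashed" });
sh.status();
```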
Hope this helps.
Best Answer
Yes, this is a supported approach for building indexes on replica sets. However, if your goal is to efficiently remove a large quantity of existing documents there are some caveats to be aware of as noted below.
A TTL index will not speed up removal of documents if you already have an index that supports finding expired documents: the TTL thread still needs to find & remove matching documents so will be doing similar work to a bulk remove.
I would investigate why your current bulk remove operations are slow. For example, make sure you have an optimal index in place to find documents to remove and monitor your system resources (memory, I/O, network, ...) to ensure there aren't any obvious bottlenecks.
If you have a large number of documents that are ready to be removed when the TTL index is created, this could have a significant performance impact. Bulk remove queries with a supporting index would allow more control over the impact since you can add query criteria to restrict the range of documents matching each bulk deletion.
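As a sketch of that kind of range-restricted bulk delete (collection name, field name, and cutoff are hypothetical, and an index on the date field is assumed, e.g. `db.events.createIndex({ createdAt: 1 })`):

```javascript
// Delete old documents in a bounded range rather than all at once;
// the 30-day cutoff below is purely illustrative.
var cutoff = new Date(Date.now() - 30 * 24 * 3600 * 1000);
db.events.deleteMany({ createdAt: { $lt: cutoff } });
```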
That timing is incorrect: the TTL deletion task runs every 60 seconds. Based on an indexed date field the TTL monitor can either expire documents after a specified number of seconds has passed or expire documents at a specific clock time.
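To make the two modes concrete: for the fixed-interval mode you index a creation date with a non-zero expireAfterSeconds; for the clock-time mode you store the expiry date itself and set expireAfterSeconds to 0. The helper below is not a MongoDB API, just an illustrative sketch of the rule the TTL monitor applies (collection and field names are hypothetical; runnable in Node):

```javascript
// Interval mode (names hypothetical):
//   db.events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
// Clock-time mode: store the expiry date itself and use 0:
//   db.events.createIndex({ expireAt: 1 }, { expireAfterSeconds: 0 })

// Illustrative helper: a document is eligible for TTL deletion once its
// indexed date plus expireAfterSeconds is in the past.
function isExpired(indexedDate, expireAfterSeconds, now) {
  return indexedDate.getTime() + expireAfterSeconds * 1000 < now.getTime();
}

const now = new Date("2024-01-01T12:00:00Z");
console.log(isExpired(new Date("2024-01-01T10:00:00Z"), 3600, now)); // true: 2h old, 1h TTL
console.log(isExpired(new Date("2024-01-01T11:30:00Z"), 3600, now)); // false: only 30m old
```

Remember that eligibility is not immediacy: the background task only runs every 60 seconds, so deletion can lag expiry by up to a minute (or more under load).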
Assuming your documents have a range of expiry dates, once the initial removal of expired documents is complete a TTL index will be able to delete documents in smaller batches which will be less impactful than an infrequent bulk delete.
Prior to MongoDB 4.2, a foreground index build on a populated collection will block all other operations on the database that holds that collection. For a populated collection in a production environment you will definitely want to use either a rolling index build or a background index build. The rolling index build ensures that only one of your replica set members is building an index and allows a foreground index build to complete faster, however this approach does include some risk of that member becoming stale while running in standalone mode.
MongoDB 4.2+ uses an optimised index build process that limits the lock scope to the affected collection and only holds an exclusive lock at the beginning and end of the index build. You can still use the rolling index build approach but there is no longer a foreground vs background index build distinction.
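In mongo shell terms the distinction looks like this (collection and key are hypothetical):

```javascript
// Pre-4.2: request a background build to avoid blocking the database:
db.events.createIndex({ status: 1 }, { background: true });

// 4.2+: the background option is ignored; all builds use the optimised
// process that holds an exclusive lock only at the start and end:
db.events.createIndex({ status: 1 });
```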
The TTL index thread on replica set members only deletes documents when a member is in the primary state. Document deletes are replicated via the oplog, so secondaries always have a consistent point in time with the current primary.
If you restart a replica set member in standalone mode, the TTL collection monitor will not be started (again, to keep the secondary state consistent).