There is currently no priority system for writes (or reads, though you can send reads to secondaries); the closest thing you will get is yielding. For long-running operations, and for operations that it predicts will page data in from disk, MongoDB will yield the lock and allow other operations through, essentially interleaving operations:
If you wanted to make sure that the less important writes are throttled somewhat, you could rate limit them by issuing them with a write concern of w=2 (REPLICAS_SAFE or similar, depending on your driver). See here for the command behind such implementations on the MongoDB side, and take a look at your driver docs for the relevant equivalent there:
http://www.mongodb.org/display/DOCS/getLastError+Command#getLastErrorCommand-%7B%7Bw%7D%7D
There would then be a slight delay as the write waits for replication out to the secondaries, allowing your other, more important writes their shot at the write lock.
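If your driver exposes the underlying command directly, the throttling idea above can be sketched in the mongo shell like this (collection and field names are hypothetical, and w:2 assumes you have at least one secondary):

```javascript
// Less important write: wait for replication to one secondary (w:2)
// before continuing, which naturally rate-limits this writer.
db.logs.insert({ event: "page_view", ts: new Date() });
db.runCommand({ getLastError: 1, w: 2, wtimeout: 5000 });

// More important write: only wait for acknowledgement from the primary (w:1).
db.alerts.insert({ level: "critical", ts: new Date() });
db.runCommand({ getLastError: 1, w: 1 });
```

The wtimeout value bounds how long the low-priority writer will block if a secondary is lagging.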
http://www.mongodb.org/display/DOCS/How+does+concurrency+work
In terms of the future, with 2.2 due out shortly, you will get database-level locking, so as long as your two different profiles/priorities are in different databases you should have no lock contention (I/O and RAM contention may still exist, of course).
Finally, in terms of other things to look at, for the line by line type read, I would look at capped collections and tailable cursors - see if they fit your use case:
http://www.mongodb.org/display/DOCS/Tailable+Cursors
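A minimal sketch of that pattern in the mongo shell (collection name and size are hypothetical):

```javascript
// Capped collection: fixed size, preserves insertion order
db.createCollection("events", { capped: true, size: 1048576 });

// Tailable cursor: behaves like `tail -f`, remaining open and
// returning new documents as they are inserted.
var cursor = db.events.find()
                      .addOption(DBQuery.Option.tailable)
                      .addOption(DBQuery.Option.awaitData);
while (cursor.hasNext()) {
    printjson(cursor.next());
}
```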
You can verify that no migrations are running by checking the balancer with sh.isBalancerRunning(), which returns true if chunks are being migrated and false if not. sh.getBalancerState() only shows you whether the balancer is enabled or disabled, not its current run state. While it depends on what the specific documentation says, I'd probably feel safer setting the balancer state to false, checking the migration status with the command above, and then stopping it:
sh.stopBalancer()
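Put together, that safer sequence might look like this in the mongo shell (the polling loop is just an illustrative sketch):

```javascript
sh.setBalancerState(false);      // prevent new migration rounds from starting

// Wait for any in-flight migration to finish
while (sh.isBalancerRunning()) {
    print("Migration still in progress, waiting...");
    sleep(1000);
}

sh.stopBalancer();               // disable the balancer and verify it has stopped
```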
So now that we have the proper method clarified:

"What is the bad effect?"
I'm not too sure how gracefully MongoDB would potentially handle this issue. However, you should be able to find the steps that occur during a migration in your log.
Always check the logs!
Update: As specified by the OP, you can also use sh.status() to check whether there are any recorded errors in migrations from the balancer, provided this work occurred in the last 24 hours. If it was more than 24 hours ago, go check the logs.
Update 2: Marcus clarified in the comments that partial migrations are not possible, so this should not be a concern.
Best Answer
The locking of documents differs between the WiredTiger and MMAPv1 storage engines. When using the WiredTiger storage engine, write operations only hold an exclusive lock at the document level. This means that multiple threads can update multiple documents in the same collection at the same time; however, they can NOT update the same document at the same time. In addition to an exclusive lock at the document level, WiredTiger also holds intent locks at the global, database, and collection levels. These intent locks will not block read or write operations; however, when the storage engine detects conflicts between two operations, one will incur a write conflict, causing MongoDB to transparently retry that operation.
When using the classical MMAPv1 storage engine, a write operation holds an exclusive lock on the entire collection, so multiple threads can NOT write to the same collection at the same time, let alone write to the same document at the same time.
Locking does not occur when reading from a document, so two threads can read the same document at the same time. Regarding $isolated: you need to use this for write operations that affect multiple documents. Setting this flag prevents the write operation from yielding to other reads and writes once the first document is written. For example, if you are running an update query on documents 1, 2, ..., 100, setting the $isolated flag will prevent document 100 from being changed while the update query is still busy updating document 1.
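For example, a multi-document update with the flag set might look like this in the mongo shell (collection and field names are hypothetical):

```javascript
// $isolated goes inside the query document; without it, other writers
// could modify not-yet-updated documents midway through this update.
db.accounts.update(
    { status: "active", $isolated: 1 },
    { $inc: { balance: -10 } },
    { multi: true }
);
```

Note that $isolated is not supported on sharded collections.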