There is no priority system currently for writes (or reads, though you can send reads to secondaries); the closest thing you will get is yielding. For long-running operations, and for operations that it predicts will page in data from disk, MongoDB will yield the lock and allow other operations through, essentially interleaving operations.
If you wanted to make sure that the less important writes are throttled somewhat, you could rate-limit them by issuing them as replicated writes with w=2, REPLICAS_SAFE, or similar (depending on your driver). See here for the command behind such implementations on the MongoDB side - take a look at your driver docs for the relevant equivalent there:
http://www.mongodb.org/display/DOCS/getLastError+Command#getLastErrorCommand-%7B%7Bw%7D%7D
There would then be a slight delay as the write waits for replication out to the secondaries, allowing your other, more important writes their shot at the write lock.
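As a sketch of the throttling idea (the collection name and documents are assumptions, and the write-concern syntax shown is the current driver/shell form rather than the old getLastError call):

```javascript
// Low-priority write: wait for acknowledgement from the primary
// plus one secondary (w: 2) before returning. The replication wait
// slows the caller down, leaving the write lock free more often.
db.events.insertOne(
  { type: "low-priority-log", ts: new Date() },
  { writeConcern: { w: 2 } }
);

// High-priority write: default (primary-only) acknowledgement,
// so it is not delayed by replication.
db.events.insertOne({ type: "important-update", ts: new Date() });
```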
http://www.mongodb.org/display/DOCS/How+does+concurrency+work
In terms of the future, with 2.2 due out shortly, you will get database-level locking, so as long as your two different profiles/priorities are in different databases you should have no lock contention (I/O and RAM contention may still exist, of course).
Finally, in terms of other things to look at, for the line-by-line type of read I would look at capped collections and tailable cursors - see if they fit your use case:
http://www.mongodb.org/display/DOCS/Tailable+Cursors
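A minimal sketch in the shell (the collection name and size are assumptions, and the exact tailable-cursor API differs between shell versions):

```javascript
// Tailable cursors require a capped collection.
db.createCollection("log", { capped: true, size: 1024 * 1024 });

// A tailable cursor stays open after the last document is returned,
// so documents inserted later can be read as they arrive
// (similar to `tail -f` on a log file).
var cursor = db.log.find().tailable();
while (cursor.hasNext()) {
  printjson(cursor.next());
}
```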
1) Adding an index will slow inserts.
2) It will speed up inserts.
3) Inserts don't use indexes.
4) It will speed up writes, but w=0 risks consistency and j=0 risks durability.
5) You can't insert data into secondaries; insert/update/delete operations apply only to the primary.
If you care about insert speed you should use hash-based sharding, which evenly distributes writes among the shards and increases write throughput. (http://docs.mongodb.org/manual/tutorial/shard-collection-with-a-hashed-shard-key/)
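As a sketch (the database and collection names are assumptions), sharding a collection on a hashed _id looks like this in the shell:

```javascript
// Enable sharding for the database, then shard the collection on a
// hashed _id so inserts are distributed evenly across the shards
// instead of hammering a single "hot" shard.
sh.enableSharding("mydb");
sh.shardCollection("mydb.events", { _id: "hashed" });
```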
Let's consider a collection with documents like this:
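The original example documents are not shown; a plausible pair, matching the firstname/lastname fields used below, might be:

```javascript
{ "_id" : 1, "firstname" : "john", "lastname" : "doe" }
{ "_id" : 2, "firstname" : "jane", "lastname" : "smith" }
```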
If you are searching the collection:
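The original queries are missing here; given the text that follows, they presumably filter on both fields in either order, e.g.:

```javascript
db.collection.find({ lastname: "doe", firstname: "john" })
db.collection.find({ firstname: "john", lastname: "doe" })
```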
the output is the same.
Indexes are created on the fields used in the search criteria (or filter), for a fast search. Suppose an index is created on the two fields like this:
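The index definition itself is missing from the text; based on the index named later ({ lastname: 1, firstname: 1 }), it would be:

```javascript
// createIndex in current shells; ensureIndex in older (2.x-era) shells
db.collection.createIndex({ lastname: 1, firstname: 1 })
```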
After the index creation, both of the above queries perform the same way and use the above index for the search. The order of the fields in the search criteria does not matter (in this case).
This query also uses the above index:
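The snippet is missing; a query on the index's prefix field alone can still use the index, e.g.:

```javascript
db.collection.find({ lastname: "doe" })
```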
But, the following query doesn't:
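Again the snippet is missing; a query on the non-prefix field alone cannot use the index, e.g.:

```javascript
db.collection.find({ firstname: "john" })
```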
In other words, the order in which the fields are specified in the index matters. Creating an index with:
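The first index definition is missing; from the discussion below, it is:

```javascript
db.collection.createIndex({ lastname: 1, firstname: 1 })
```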
is not the same as:
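The second index definition is also missing; from the discussion below, it is:

```javascript
db.collection.createIndex({ firstname: 1, lastname: 1 })
```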
Query Selectivity
For a query that uses both fields together with an index on both fields, the order of the fields in the index still matters. This is largely determined by a factor called query selectivity.
Query selectivity measures how well the first field of the index filters out a large set of documents, so that the following index fields have the least to select from. For example, if there are 1 million documents in the collection and 2,000 documents with "doe" as lastname, then the query is selective with the index { lastname: 1, firstname: 1 }. Suppose, on the same data set and query, the index is { firstname: 1, lastname: 1 } and there are 250,000 documents with "john" as firstname; that index is not very selective (a further search among 250,000 documents for the lastname "doe" is needed, which is not very performant). In general, queries with $ne and $nin are considered not very selective.
How to find out if a query is using an index or not, or using the right index?
You can use the explain method on the query, which generates a query plan for it. The query plan tells you whether the query is using an index or not, and, if there are multiple indexes, which one is being used (or whether no index is being used at all). There are also options to see other information, such as the amount of time the query takes using the index.
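As a sketch (the collection and filter are assumptions), in the shell:

```javascript
// "executionStats" adds timing and documents-examined counts
// to the basic query plan.
db.collection.find({ lastname: "doe", firstname: "john" })
             .explain("executionStats")
```

In the output, an IXSCAN stage means an index was used (the stage names the index); a COLLSCAN stage means the whole collection was scanned.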