There is currently no priority system for writes (or for reads, though reads can be sent to secondaries) - the closest thing you will get is yielding. For long-running operations, and for operations that it predicts will need to page data in from disk, MongoDB will yield the lock and allow other operations through, essentially interleaving operations:
If you wanted to make sure that the less important writes are throttled somewhat, you could rate limit them by issuing them with a write concern of w=2, REPLICAS_SAFE, or similar (depending on your driver). See here for the command behind such implementations on the MongoDB side - take a look at your driver docs for the relevant equivalent there:
http://www.mongodb.org/display/DOCS/getLastError+Command#getLastErrorCommand-%7B%7Bw%7D%7D
There would then be a slight delay while each such write waits for replication out to the secondaries, giving your other, more important writes their shot at the write lock.
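In the shell, such a replication-acknowledged write looks something like this (the collection name is just illustrative; driver APIs wrap the same getLastError command):

```javascript
// Insert the low-priority write, then block until it has
// replicated to at least 2 members (w : 2), or give up after 5s.
db.lowpriority.insert({ msg : "background work" });
db.runCommand({ getLastError : 1, w : 2, wtimeout : 5000 });
```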
http://www.mongodb.org/display/DOCS/How+does+concurrency+work
In terms of the future, with 2.2 due out shortly, you will get database level locking, so as long as your 2 different profiles/priorities are in different databases you should have no lock contention (IO/RAM contention may still exist, of course).
Finally, in terms of other things to look at for the line-by-line type of read, I would look at capped collections and tailable cursors - see if they fit your use case:
http://www.mongodb.org/display/DOCS/Tailable+Cursors
MongoDB has built in support for geoindexing. You don't need to do the calculation yourself.
Basically, you would create a field with the lat/long stored as an array or as sub documents, something like one of these:
{ loc : [ 50 , 30 ] } //SUGGESTED OPTION
{ loc : { x : 50 , y : 30 } }
{ loc : { lon : 40.739037, lat: 73.992964 } }
Then index the new loc field appropriately:
db.places.ensureIndex( { loc : "2d" } )
Finally, you can then use one of the geo operators to query for, say, the 20 results nearest a point:
db.places.find( { loc : { $near : [50,50] } } ).limit(20)
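If you also want to bound the search radius rather than just take the nearest N, $near can be combined with $maxDistance (the units here are in the coordinate system of the 2d index):

```javascript
// Nearest 20 places within 10 units of [50, 50]
db.places.find({ loc : { $near : [50, 50], $maxDistance : 10 } }).limit(20);
```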
You could, of course, just use MongoDB to store the data, then pull the information out of the DB with a find() and do the calculation client-side but I imagine that is not what you want to do.
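If you did go the client-side route, the usual calculation is the haversine (great-circle) distance. A minimal sketch, assuming coordinates in degrees and a spherical Earth of radius 6371 km:

```javascript
// Great-circle distance between two lat/long points, in km.
// Assumes a spherical Earth with mean radius 6371 km.
function haversineKm(lat1, lon1, lat2, lon2) {
  var R = 6371;
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

You would call this on each document pulled back by find(), but as noted, letting the index do the work server-side is almost always the better option.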
If the distance part of the equation is what you want:
http://www.mongodb.org/display/DOCS/Geospatial+Indexing#GeospatialIndexing-geoNearCommand
The geoNear command also returns the distance. An example:
> db.runCommand( { geoNear : "places" , near : [50,50], num : 10 } );
{
    "ns" : "test.places",
    "near" : "1100110000001111110000001111110000001111110000001111",
    "results" : [
        {
            "dis" : 69.29646421910687,
            "obj" : {
                "_id" : ObjectId("4b8bd6b93b83c574d8760280"),
                "y" : [ 1, 1 ],
                "category" : "Coffee"
            }
        },
        {
            "dis" : 69.29646421910687,
            "obj" : {
                "_id" : ObjectId("4b8bd6b03b83c574d876027f"),
                "y" : [ 1, 1 ]
            }
        }
    ],
    "stats" : {
        "time" : 0,
        "btreelocs" : 1,
        "nscanned" : 2,
        "objectsLoaded" : 2,
        "avgDistance" : 69.29646421910687
    },
    "ok" : 1
}
The "dis" : 69.29646421910687 elements are what you are looking for; there is also a spherical distance option (pass spherical : true to the command).
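With the spherical option the "dis" values come back in radians, so a small conversion gives real-world units. A sketch, again assuming a spherical Earth of radius 6371 km:

```javascript
// geoNear with spherical : true returns "dis" in radians;
// multiply by the Earth's radius to get kilometres.
var EARTH_RADIUS_KM = 6371;
function radiansToKm(dis) {
  return dis * EARTH_RADIUS_KM;
}
```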
For all this, how to use the distances, and more, take a look here for more information on geo indexes and how to use them:
http://www.mongodb.org/display/DOCS/Geospatial+Indexing/
Best Answer
I have used PostGIS for over a decade now, and I can tell you for sure that there is no match for it in the NoSQL world.
How many rows do you have? How large is the dataset? Mongo is definitely not going to make you happy. I am pretty sure something fishy was done on the PostgreSQL side for you to even consider using Mongo. Let us fix it ...