There is currently no priority system for writes (or reads, though you can send reads to secondaries); the closest thing you will get is yielding. For long-running operations, and for operations that MongoDB predicts will page data in from disk, it will yield the lock and allow other operations through, essentially interleaving them:
If you wanted to make sure that the less important writes are throttled somewhat, you could rate-limit them by issuing them with w=2, REPLICAS_SAFE, or a similar write concern (depending on your driver). See here for the command behind such implementations on the MongoDB side, and take a look at your driver docs for the equivalent there:
http://www.mongodb.org/display/DOCS/getLastError+Command#getLastErrorCommand-%7B%7Bw%7D%7D
There will then be a slight delay as each such write waits for replication out to the secondaries, giving your other, more important writes their shot at the write lock.
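As a sketch of the idea in the mongo shell (the collection name here is made up), you issue the low-priority write and then call getLastError with w=2 so the client blocks until the write has reached at least one secondary:

```js
// insert a low-priority write, then block until it has replicated to
// at least 2 members (primary + 1 secondary) before sending the next one
db.lowPriorityLogs.insert({ event: "page_view", ts: new Date() });
db.runCommand({ getLastError: 1, w: 2, wtimeout: 5000 });
```

The wtimeout keeps the client from blocking forever if a secondary is down; most drivers expose the same knobs as a write-concern option rather than a raw command.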
http://www.mongodb.org/display/DOCS/How+does+concurrency+work
In terms of the future, with 2.2 due out shortly, you will get database-level locking, so as long as your two different profiles/priorities are in different databases you should have no lock contention (I/O and RAM contention may still exist, of course).
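Splitting the workloads is then just a matter of targeting two databases from the same connection (the database and collection names below are hypothetical):

```js
// with per-database locking in 2.2, writes to these two databases
// no longer contend for a single global write lock
var important = db.getSiblingDB("important_db");
var bulk = db.getSiblingDB("bulk_db");

important.events.insert({ type: "payment", ts: new Date() });
bulk.logs.insert({ type: "debug", ts: new Date() });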
Finally, in terms of other things to look at: for the line-by-line type of read, I would look at capped collections and tailable cursors to see if they fit your use case:
http://www.mongodb.org/display/DOCS/Tailable+Cursors
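A minimal shell sketch of that pattern (collection name and size are made up) creates a capped collection and then tails it much like `tail -f`:

```js
// capped collection: fixed size, documents kept in insertion order
db.createCollection("log_stream", { capped: true, size: 100 * 1024 * 1024 });

// tailable cursor: stays open and yields new documents as they arrive
var cursor = db.log_stream.find()
                          .addOption(DBQuery.Option.tailable)
                          .addOption(DBQuery.Option.awaitData);
while (cursor.hasNext()) {
    printjson(cursor.next());
}
```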
Your mongostat output shows a higher number of updates than inserts. One thing that can cause high write lock contention is updates that grow the document, forcing it to be moved within the data file. We ran into this ourselves, but we were working with MongoDB support at the time to diagnose it, so I don't remember which metric or stat confirms this is happening. It is most likely to be an issue when your documents are very large. We ended up splitting a sub-array that was constantly being appended to into its own collection, so that we were adding new documents instead of growing an existing one.
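The before/after shape of that refactor looks roughly like this (collection names and the `userId`/`newEvent` variables are hypothetical):

```js
// before: $push grows the embedded array, and once the document
// outgrows its allocated record it must be moved in the data file
db.users.update({ _id: userId }, { $push: { activity: newEvent } });

// after: append-only inserts into a child collection; existing
// documents are never grown, so they never move
db.userActivity.insert({ userId: userId, event: newEvent, ts: new Date() });
```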
The usePowerOf2Sizes flag on the collection can also help alleviate this by giving documents room to grow. This is the default as of 2.6, but you need to turn it on explicitly on earlier versions. Setting it is described here: http://docs.mongodb.org/manual/reference/command/collMod/
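On a pre-2.6 deployment, turning it on for an existing collection is a one-line admin command (the collection name here is just an example):

```js
// enable power-of-2 record allocation so updated documents have
// slack space to grow into (default from 2.6 onward)
db.runCommand({ collMod: "userActivity", usePowerOf2Sizes: true });
```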
At present (that is, as of writing) there is no actual official release date for MongoDB 3.0, nor is there really an official repository of binary builds for any platform.
Results vary by distribution, but in this case you are probably best off following the official documentation links and changing the version in the required drop-down to "3.0 (upcoming)" (again, as of writing). The repo location links have been changed across most OS distributions pending the 3.0 release, so URLs in the documentation may need some massaging.
Right now the RHEL/CentOS repo for current is here: http://repo.mongodb.org/yum/redhat/7/mongodb-org/testing/x86_64/RPMS/
Following that, you can install a binary build of the current release candidate for your own usage. Failing that, just grab something like https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-3.0.0-rc11.tgz and deploy it according to your platform's mapping of locations and configurations.
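For the yum route, a repo file along these lines should work (this is a sketch; the exact baseurl may need the same massaging mentioned above, and gpgcheck is disabled here only because the release candidates are unsigned testing builds):

```
# /etc/yum.repos.d/mongodb-org-3.0.repo -- points at the "testing"
# repo that currently hosts the 3.0 release candidates
[mongodb-org-3.0]
name=MongoDB 3.0 Repository (testing)
baseurl=http://repo.mongodb.org/yum/redhat/7/mongodb-org/testing/x86_64/
gpgcheck=0
enabled=1
```

Then `yum install mongodb-org` picks up the release-candidate packages.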
When it's released, it will be released. Contact your local MUG for more info if someone is prepared to give it to you. Otherwise, just live with "Soon".