I am wondering if anyone knows the history of why READ COMMITTED
is the default transaction isolation level for PostgreSQL, SQL Server, Oracle, Vertica, DB2, Informix, and Sybase.
MySQL defaults to REPEATABLE READ, at least with InnoDB, as do SQLite and NuoDB (which calls it "Consistent Read").
Again, I am not asking what the differences are between the isolation levels, but rather for some explanation of why READ COMMITTED was chosen as the default
in so many SQL databases. My wild guesses are: a small performance benefit, ease of implementation, some recommendation in the SQL standard itself, and/or "that's the way it's always been". The obvious downside of this choice is that READ COMMITTED
tends to be quite counterintuitive for developers and can lead to subtle bugs.
Best Answer
Though I don't remember any specific reference in BOL and so can't cite one here, my understanding is that it is related to locking. Higher isolation levels can cause locking issues. READ COMMITTED leans more toward write locks than read locks, which fits an OLTP environment better than an OLAP one. Commercial databases decide on a default isolation level that fits their internal algorithms best. Choosing an isolation level depends on how the RDBMS wants to handle locking and how much it cares about reading correct data. Most RDBMSs prefer READ COMMITTED for faster reads, better performance, and minimal locking.
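For what it's worth, the default is easy to inspect and override per transaction. A minimal sketch using PostgreSQL syntax (other engines use slightly different commands; the accounts table is just a hypothetical example):

    -- Check the session default (PostgreSQL syntax).
    SHOW default_transaction_isolation;   -- typically "read committed"

    -- Override the level for a single transaction when you need
    -- stronger guarantees than the READ COMMITTED default provides.
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT balance FROM accounts WHERE id = 1;
    COMMIT;

So the vendors' choice is only a default; any individual transaction that needs stronger guarantees can ask for them.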
As per Wikipedia, a lower isolation level increases concurrency and decreases waiting on other transactions, but increases the chance of reading incorrect data; a higher isolation level decreases concurrency and increases waiting on other transactions, but decreases the chance of reading incorrect data.
You can read the complete synopsis here.
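To make the "reading incorrect data" risk concrete, here is a sketch of the classic non-repeatable read anomaly that READ COMMITTED permits. The table, values, and interleaving are hypothetical, and the syntax is PostgreSQL-flavored:

    -- Session 1                         -- Session 2
    BEGIN;  -- READ COMMITTED (default)
    SELECT balance FROM accounts
     WHERE id = 1;  -- returns 100
                                         BEGIN;
                                         UPDATE accounts SET balance = 50
                                          WHERE id = 1;
                                         COMMIT;
    SELECT balance FROM accounts
     WHERE id = 1;  -- now returns 50:
                    -- a non-repeatable read
    COMMIT;

Under REPEATABLE READ, session 1's second SELECT would still see 100, because the whole transaction reads from one snapshot; under READ COMMITTED, each statement sees the latest committed data, which is exactly the behavior that can surprise developers.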