Let's assume transaction A runs in SERIALIZABLE and transaction B runs in a more permissive isolation level, like READ UNCOMMITTED. Can transaction A now suffer from any anomaly introduced by transaction B?
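To make the scenario concrete, here is how the two sessions would be started in PostgreSQL syntax (note that PostgreSQL accepts READ UNCOMMITTED but internally treats it as READ COMMITTED):

```sql
-- Session 1: transaction A
BEGIN ISOLATION LEVEL SERIALIZABLE;

-- Session 2: transaction B, running concurrently
BEGIN ISOLATION LEVEL READ UNCOMMITTED;
```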
I found this SO question, but its answer doesn't address that point. I read the referenced manual page as applying only to homogeneous scenarios:
The guarantee that any set of concurrent serializable transactions will have the same effect as if they were run one at a time means that if you can demonstrate that a single transaction, as written, will do the right thing when run by itself, you can have confidence that it will do the right thing in any mix of serializable transactions.
I found these references in the SQL draft indicating that SERIALIZABLE transactions will indeed not suffer from any anomaly induced by transactions running at other isolation levels:
CD 9075-1:200x(E) page 27:
The highest isolation level, SERIALIZABLE, guarantees serializable execution, meaning that the effect of SQL-transactions that overlap in time is the same as the effect they would have had, had they not overlapped in time.
CD 9075-2:200x(E) page 124:
Changes made to SQL-data or schemas by an SQL-transaction in an SQL-session may be perceived by that SQL-transaction in that same SQL-session, and by other SQL-transactions, or by that same SQL-transaction in other SQL-sessions, at isolation level READ UNCOMMITTED, but cannot be perceived by other SQL-transactions at isolation level READ COMMITTED, REPEATABLE READ, or SERIALIZABLE until the former SQL-transaction terminates with a &lt;commit statement&gt;.
That's the standard. But how do implementations behave in reality? I'm particularly interested in the defined behaviour of PostgreSQL and MySQL.
Best Answer
The PostgreSQL wiki (https://wiki.postgresql.org/wiki/Serializable#PostgreSQL_Implementation) addresses this: PostgreSQL's serializability monitoring (SSI) only tracks the reads and writes of transactions that themselves run at the SERIALIZABLE level. In your example, since transactions A and B are not both SERIALIZABLE, anomalies can happen.
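As an illustration, here is a hypothetical write-skew sketch (the `doctors` table, the names, and the application-enforced invariant are invented for this example). If B ran at SERIALIZABLE, PostgreSQL's SSI would detect the dangerous read/write dependency and abort one transaction; because B runs at READ COMMITTED, its reads are not tracked and both transactions can commit, violating the invariant:

```sql
-- Application invariant (not enforced by a constraint):
-- at least one doctor must remain on call.

-- Session 1: transaction A
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;        -- sees 2 on call
UPDATE doctors SET on_call = false
 WHERE name = 'alice';

-- Session 2: transaction B, interleaved before A commits
BEGIN ISOLATION LEVEL READ COMMITTED;
SELECT count(*) FROM doctors WHERE on_call;        -- also sees 2 on call
UPDATE doctors SET on_call = false
 WHERE name = 'bob';
COMMIT;

-- Session 1:
COMMIT;  -- both commits succeed; nobody is on call (write skew)
```

The same interleaving with both sessions at SERIALIZABLE would raise a serialization failure in one of them, which is exactly the protection that is lost when the levels are mixed.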