PostgreSQL: ensure no concurrent reads

postgresql transaction

I'm sorry if this is a very basic question, but I would really appreciate educated advice on it:

Say I have an operation that first has to read from a table and then insert a row if the read value satisfies a certain condition, or throw if it doesn't. For example, I have the following table:

id     | value
---------------
serial | integer

And before inserting a row I first need to run:

select sum(value) from table;

then, if the sum plus the value of the row I want to insert is less than 100, insert it; if not, throw.
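In other words, the naive sequence is something like this (just a sketch; items stands in for my real table name and 42 for the value being inserted):

select sum(value) from items;
-- in the application: if sum + 42 < 100, then
insert into items (value) values (42);
-- otherwise throw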

I need to somehow make those reads process only one at a time, because if I have 2 concurrent queries, both would read, both would see that the sum is less than 100, and both would insert, resulting in an undesired state that would have been avoided if the queries were executed sequentially.

What would be the optimal solution to ensure one-by-one processing here?

So far I've tried SELECT FOR UPDATE to block concurrent read operations, but I have a feeling it might not be optimal here…

I'm also aware I could do this with a BEFORE INSERT trigger, but this logic doesn't seem to fit there, because the threshold (100 in the example) comes from the application layer at runtime and is actually different every time.

I've read a bit about transaction isolation levels (e.g. serializable), but I'm not sure if that's what I need here…

Best Answer

I need to somehow make those reads process only one at a time, because if I have 2 concurrent queries, both would read, both would see that the sum is less than 100, and both would insert, resulting in an undesired state that would have been avoided if the queries were executed sequentially.

That's a typical use case for the serializable isolation level, combined with an automated retry of the transaction when it fails with a serialization failure (SQLSTATE 40001).
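As a sketch (assuming a table named items matching the schema in the question, and 42 as the value to insert), the whole check-and-insert runs in one serializable transaction, and the application retries it from the top whenever it fails with SQLSTATE 40001:

begin isolation level serializable;

select sum(value) from items;
-- application-side: if sum + 42 >= 100, rollback and throw
insert into items (value) values (42);

commit;  -- may fail with a 40001 serialization_failure; if so, re-run the whole transaction

Under SERIALIZABLE, PostgreSQL aborts one of two transactions whose combined outcome could not have been produced by any serial execution, so the retry loop belongs in the application code issuing these statements.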

Short of adapting your schema specifically to deal with the concurrency issue, the other techniques you mention are not going to work directly anyway:

  • a BEFORE INSERT trigger can only see rows from already-committed transactions, so sum(value) <= 100 - newvalue may be true even though another, not-yet-committed transaction has already inserted a row that will make it false as soon as it commits.

  • In the SELECT FOR UPDATE technique, what row(s) is it supposed to lock? Even a SELECT FOR UPDATE over the entire table does not prevent reads or new inserts. The technique would work if it targeted a single pre-existing row that each transaction had to lock before proceeding further, effectively implementing a critical section (see the sketch below). But that table/row does not exist in your schema as described in the question.
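For completeness, here is a sketch of that critical-section variant, assuming a hypothetical single-row lock table (call it items_lock) that would have to be added to the schema, plus items as the data table and 42 as the value to insert:

-- one-time setup:
-- create table items_lock (id int primary key);
-- insert into items_lock values (1);

begin;
select 1 from items_lock where id = 1 for update;  -- only one transaction at a time gets past this line
select sum(value) from items;                      -- safe: any concurrent writer is still waiting on the lock
-- application-side: if sum + 42 < 100, then
insert into items (value) values (42);
commit;  -- releases the row lock

This serializes the whole check-and-insert behind one row lock, at the cost of making these transactions strictly sequential.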