PostgreSQL Many-to-Many Relationship Validation – Ensuring Limits

Tags: concurrency, many-to-many, postgresql

In a Postgres DB, I have a relationship as follows:

  1. A Group table has a column userEntranceLimits.
  2. A UserEntrance table has a column userId and a column groupId.
  3. The number of distinct users holding a UserEntrance for a specific group may not exceed that group's userEntranceLimits.
  4. Inside a transaction (whose isolation level is REPEATABLE_READ) for creating a new UserEntrance, I check whether the current count of UserEntrances for that group is >= the limit.
  5. If it is not, I proceed to create the new UserEntrance (roughly as in the sketch after this list).
  6. On rare occasions, a group ends up with more UserEntrances than its limit.
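
For concreteness, here is a simplified sketch of the schema and of the check-then-insert described in steps 4-6. The DDL, types, and parameter placeholders are approximations, not the real code:

```sql
-- Simplified schema as described above (DDL and types are approximations).
CREATE TABLE "Group" (
    id                    bigint  PRIMARY KEY,
    "userEntranceLimits"  integer NOT NULL
);

CREATE TABLE "UserEntrance" (
    id         bigserial PRIMARY KEY,
    "userId"   bigint    NOT NULL,
    "groupId"  bigint    NOT NULL REFERENCES "Group" (id)
);

-- The check-then-insert from steps 4-5 ($1 = groupId, $2 = userId).
BEGIN ISOLATION LEVEL REPEATABLE READ;

SELECT count(DISTINCT "userId")          -- current number of distinct users
FROM   "UserEntrance"
WHERE  "groupId" = $1;

-- The application compares that count with "userEntranceLimits";
-- only if it is below the limit:
INSERT INTO "UserEntrance" ("userId", "groupId") VALUES ($2, $1);

COMMIT;

-- Two concurrent transactions can both read a count below the limit and both
-- insert, which is how a group ends up over its limit (step 6).
```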

I understand why this race condition exists, but I do not understand the proper way to prevent it from happening without using the SERIALIZABLE isolation level. We have many concurrent writes on this table, and when we tried to make the transaction SERIALIZABLE before, it did not go too well.

My idea right now is to do a SELECT ... FOR UPDATE and to count the number of occurrences on the application side, but that looks like a bad solution to me. What is the proper solution for this kind of problem?
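
To make that idea concrete, this is roughly what I have in mind (a sketch only, with the same placeholder names as above):

```sql
-- Sketch of the FOR UPDATE idea ($1 = groupId, $2 = userId).
BEGIN ISOLATION LEVEL REPEATABLE READ;

SELECT "userId"
FROM   "UserEntrance"
WHERE  "groupId" = $1
FOR UPDATE;                  -- locks only the existing rows for this group

-- Count the distinct users in the application; if below the limit:
INSERT INTO "UserEntrance" ("userId", "groupId") VALUES ($2, $1);

COMMIT;
```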

Best Answer

SELECT ... FOR UPDATE will not work in this case. It would still effectively require the SERIALIZABLE isolation level, because what you need is to prevent other sessions from inserting new rows with the same groupId, and to do that you need to lock a key range (which is essentially what the SERIALIZABLE level does for you). What you can try instead:

  1. Check and adjust your indexes, so that you do not lock more rows than you actually need.
  2. Revise your code to keep the transaction as short as possible.
  3. Try to emulate locks using a separate table (holding the list of groupIds and a locked flag), updating the row for the required groupId when the check begins and releasing it when the transaction ends (see the sketch below).
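
A minimal sketch of option 3, assuming a dedicated group_locks table (all names here are placeholders, not an existing schema). In this sketch the lock row is updated as the first statement inside the transaction, so the row lock is simply held until commit:

```sql
-- One row per group in a dedicated lock table (all names are placeholders).
CREATE TABLE group_locks (
    "groupId"  bigint  PRIMARY KEY REFERENCES "Group" (id),
    locked     boolean NOT NULL DEFAULT false
);

-- $1 = groupId, $2 = userId
BEGIN;

-- Take the per-group lock first: the row lock acquired by this UPDATE is held
-- until COMMIT/ROLLBACK, so concurrent transactions for the same group wait here.
UPDATE group_locks SET locked = true WHERE "groupId" = $1;

-- The count can no longer change underneath us for this group.
SELECT count(DISTINCT "userId")
FROM   "UserEntrance"
WHERE  "groupId" = $1;

-- If the count is below the group's "userEntranceLimits", insert; else ROLLBACK.
INSERT INTO "UserEntrance" ("userId", "groupId") VALUES ($2, $1);

COMMIT;   -- releases the row lock on group_locks
```

Taking the lock inside the transaction means it is released automatically on commit, rollback, or connection loss, so a crashed session cannot leave a group locked forever. Once all check-then-insert transactions for a group are serialized this way, the default READ COMMITTED isolation level is sufficient for the check.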