I need some insights on row-level locking in Postgres.
- How many shared locks (FOR SHARE) can be held concurrently on a particular tuple in a relation?
- Will having many shared locks on the same tuple cause any performance or other issues for the database?
- If such limitations exist, how can they be configured, and what limits are advisable?
Best Answer
Quoting from the manual (emphasis mine):

> PostgreSQL doesn't remember any information about modified rows in memory, so **there is no limit on the number of rows locked at one time**. However, locking a row might cause a disk write, e.g., SELECT FOR UPDATE modifies selected rows to mark them locked, and so will result in disk writes.

So the only performance issue you might run into is the disk writes that heavy locking can cause.
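To make that concrete, here is a sketch (the table and column names are made up for illustration): any number of sessions can hold FOR SHARE on the same row simultaneously, because shared row locks are compatible with each other.

```sql
-- Session 1: take a shared lock on one row of a hypothetical "accounts" table
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;

-- Session 2 (and 3, 4, ...): the same statement succeeds immediately,
-- since FOR SHARE does not conflict with other FOR SHARE locks.
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;

-- A writer, however, blocks until every shared lock on the row is released:
-- UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- waits
```

Note that each SELECT ... FOR SHARE marks the row in its tuple header, which is why the manual warns about disk writes rather than memory limits.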
While there is effectively no limit on the number of row locks in the system, there is a limit on the number of distinct objects (e.g., tables) that can participate in locking for a single transaction. That limit is controlled through max_locks_per_transaction.
That parameter controls the number of objects one transaction can hold locks on, not the number of row locks: a transaction can still lock a million rows in a single table. The default value of 64 for max_locks_per_transaction means that a transaction can hold locks on roughly 64 tables at most (but, e.g., a million rows in each of them). So if your transactions never touch more than about 64 tables, there is nothing to worry about. A common reason to increase this value is dropping a lot of tables in a single transaction, e.g. through a drop owned by current_user statement to clean up a database.
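As a sketch of how to inspect and raise the setting (the value 256 below is illustrative, not a recommendation):

```sql
-- Check the current setting (default is 64)
SHOW max_locks_per_transaction;

-- Changing it requires a server restart; set it in postgresql.conf
-- or via ALTER SYSTEM, then restart the server:
ALTER SYSTEM SET max_locks_per_transaction = 256;

-- The shared lock table holds roughly
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- object-lock slots in total, shared by all sessions, so one transaction
-- can exceed 64 objects as long as others use fewer.
```

Because the slots are pooled across sessions, raise the value only when you actually see "out of shared memory / increase max_locks_per_transaction" errors, since a larger lock table consumes more shared memory.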