PostgreSQL – "out of shared memory" error when doing pg_dump

locking, pg-dump, postgresql, psql

I have a Postgres database with 3 schemas and around 1200 tables. When I try to take a schema-only pg_dump of it using the command pg_dump -h localhost -p 5432 -U postgres -d my_test_db -s, I sometimes get the following error:

ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction

This happens seemingly at random; when I re-run the same command, it works fine. Is there any way to overcome this?

Best Answer

pg_dump takes an ACCESS SHARE lock on each table it dumps, so you need to configure Postgres to support that many locks. It's unclear to me whether you have 3 schemas with 1200 tables each (which would mean a total of 3600 tables) or 1200 tables distributed across 3 schemas.
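If you are not sure how many tables that is, you can count them per schema from the catalogs. A minimal sketch (it counts ordinary and partitioned tables and skips the system schemas):

-- count the tables pg_dump would have to lock, per schema
SELECT n.nspname AS schema_name, count(*) AS table_count
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p')   -- ordinary and partitioned tables
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
GROUP BY n.nspname
ORDER BY table_count DESC;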

Quote from the manual

The shared lock table tracks locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table.

Assuming everything is configured with the defaults, you wind up with 64 * (100 + 0) = 6400 objects that can be locked. If pg_dump is the only thing running on your database, the default configuration should be fine.
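You can check what that works out to on your server by reading the relevant settings in psql; a sketch:

-- current values of the settings that size the shared lock table
SELECT name, setting
FROM pg_settings
WHERE name IN ('max_locks_per_transaction',
               'max_connections',
               'max_prepared_transactions');

-- the resulting number of lockable objects
SELECT (SELECT setting::int FROM pg_settings WHERE name = 'max_locks_per_transaction')
     * ((SELECT setting::int FROM pg_settings WHERE name = 'max_connections')
      + (SELECT setting::int FROM pg_settings WHERE name = 'max_prepared_transactions'))
       AS lockable_objects;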

The fact that you only occasionally get that error means there are concurrent transactions that also hold many object locks, so pg_dump can't allocate enough of them.
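To confirm that, you can look at pg_locks while the dump is failing and see which sessions hold the most locks. A rough sketch (adjust the grouping to taste):

-- how full is the lock table right now?
SELECT count(*) AS locks_in_use FROM pg_locks;

-- which backends hold the most locks?
SELECT l.pid, a.usename, a.state, count(*) AS locks_held
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.pid = l.pid
GROUP BY l.pid, a.usename, a.state
ORDER BY locks_held DESC
LIMIT 10;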

You should increase that value nonetheless to avoid the "occasional" errors. I would start by increasing it to 128.
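Keep in mind that max_locks_per_transaction can only take effect after a server restart. Assuming you have superuser access, one way to change it (a sketch; you can also edit postgresql.conf directly):

-- write the new value to postgresql.auto.conf
ALTER SYSTEM SET max_locks_per_transaction = 128;
-- then restart the server, e.g. with pg_ctl or your service manager:
--   pg_ctl restart -D /path/to/data_directory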