PostgreSQL – How to take a pg_dump of a very large Postgres database

backup, postgresql, postgresql-8.4

I have a production database server running Postgres 8.4 on CentOS 6.5. The volume of data has increased recently, and within a few months the database has grown to roughly 18,500 tables and approximately 350 GB.

The problem is that I can no longer take a pg_dump of this database. Every attempt fails with an "out of shared memory" error pointing at the lock table (max_locks_per_transaction is currently 256, up from the default of 64).
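For reference, this is how I am checking the current setting and the number of tables pg_dump has to lock (mydb is a placeholder for the actual database name; the count covers ordinary tables only):

    # current per-transaction lock allowance and connection limit
    psql -d mydb -c "SHOW max_locks_per_transaction;"
    psql -d mydb -c "SHOW max_connections;"

    # number of ordinary tables pg_dump will take an ACCESS SHARE lock on
    psql -d mydb -c "SELECT count(*) FROM pg_class WHERE relkind = 'r';"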

How can I take a dump of such a large database? Please help.

Best Answer

You need to increase max_locks_per_transaction in the postgresql.conf file, typically located in /var/lib/pgsql/data or wherever you placed the PostgreSQL data directory.
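For example, a minimal edit (the value 1024 is illustrative, not a recommendation; the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, so pick a value so that this product comfortably exceeds the ~18,500 tables being dumped):

    # /var/lib/pgsql/data/postgresql.conf
    # Illustrative value. Lock table capacity is roughly
    #   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    # and must exceed the number of relations pg_dump locks (~18,500 here).
    max_locks_per_transaction = 1024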

This was already answered here.

You will have to restart the PostgreSQL service for the change to take effect.
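On CentOS 6 that is a SysV init script; the service name below assumes the stock distribution package, so adjust it if you installed PostgreSQL from another source:

    # assumes the standard CentOS 6 init script name
    sudo service postgresql restart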

The answer above also mentions that you may need to increase shared memory. Here is a link to a PostgreSQL article on that subject.
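Since PostgreSQL 8.4 allocates its shared structures in System V shared memory, enlarging the lock table may also require raising the kernel limits. A rough sketch for CentOS 6 (the values are illustrative only; size them for your machine):

    # raise System V shared memory limits (illustrative values)
    sudo sysctl -w kernel.shmmax=4294967296   # largest single segment, in bytes
    sudo sysctl -w kernel.shmall=1048576      # total shared memory, in 4 kB pages

    # persist across reboots
    echo "kernel.shmmax = 4294967296" | sudo tee -a /etc/sysctl.conf
    echo "kernel.shmall = 1048576"    | sudo tee -a /etc/sysctl.conf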