The tables in the PostgreSQL database are large even though the number of rows is relatively small (fewer than 500K rows).
Running VACUUM on the affected table did not solve the problem.
Only dropping the table and re-importing it from a previous dump file solved the problem: the table sizes returned to normal. As a result, all queries on these tables became very fast, whereas they had been slow before the import.
Note: autovacuum is on by default.
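To verify that a table is bloated rather than simply large, you can compare its on-disk size with its live and dead row counts. This is a minimal sketch using the standard `pg_size_pretty`, `pg_total_relation_size`, and `pg_stat_user_tables` facilities; `my_table` is a placeholder for the affected table's name:

```sql
-- Total on-disk size of the table, including indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('my_table'));

-- Live rows vs. dead (deleted/updated but not yet reclaimed) rows.
-- A high n_dead_tup relative to n_live_tup suggests bloat.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'my_table';
```

Note that `n_dead_tup` is a statistics-collector estimate, not an exact count.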
Postgresql – Big table size in PostgreSQL while number of rows is small
postgresql-9.1
Best Answer
Autovacuum doesn't try to release space back to the OS. That's often counterproductive, since PostgreSQL would just have to allocate it again later, and allocating new storage is slower than re-using already allocated storage.
If you want to release space back to the operating system you can run VACUUM FULL on the table(s) in question, or on the whole database. Note that this can actually make tables with lots of inserts/updates slower, unless you first set a FILLFACTOR of less than 100 so that free space is reserved in each page.
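The steps above can be sketched as follows; `my_table` is a placeholder name, and a fillfactor of 90 is an assumed example value, not a recommendation from the original answer:

```sql
-- Reserve 10% free space in each page for future updates
-- (the rewrite performed by VACUUM FULL honors this setting).
ALTER TABLE my_table SET (fillfactor = 90);

-- Rewrite the table into a new, compact file and return the
-- freed disk space to the operating system.
VACUUM FULL my_table;
```

Be aware that VACUUM FULL takes an ACCESS EXCLUSIVE lock and rewrites the entire table, blocking reads and writes for its duration, so it is best run during a maintenance window.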