Postgresql – Slow access to table in postgresql despite vacuum

postgresql, vacuum

I'm running PG 8.4

I had a table with about 20K records. Any query that touched more than one record in this table (e.g. joins) was very, very slow. Even a count would take around 20 seconds.

Stats showed 5 million live_tuples.
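
Those numbers come from the statistics collector; a query along these lines (mytable being a placeholder name) is what exposes them:

    -- Live vs. dead tuple counts and the last (auto)vacuum/analyze times
    SELECT relname, n_live_tup, n_dead_tup,
           last_vacuum, last_autovacuum, last_analyze
    FROM pg_stat_user_tables
    WHERE relname = 'mytable';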

I tried vacuum, vacuum analyze, and in the end vacuum full. Nothing changed in terms of speed, and the stats still showed the same thing.
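
For reference, on 8.4 that sequence looks roughly like this (mytable again being a placeholder); VERBOSE is worth adding because it reports how many dead row versions and pages each pass actually dealt with:

    -- Plain vacuum, vacuum plus a statistics refresh, and the
    -- exclusive-lock variant that compacts the table
    VACUUM VERBOSE mytable;
    VACUUM VERBOSE ANALYZE mytable;
    VACUUM FULL VERBOSE mytable;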

I ended up creating a new table, inserting all the records into the new table, and dropping/renaming the old one.
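
That rebuild was essentially a copy-and-swap; a minimal sketch, assuming the table is called mytable and leaving aside the indexes, constraints, and grants that also have to be recreated:

    BEGIN;
    -- Copy the live rows into a brand-new heap (no dead tuples, no bloat)
    CREATE TABLE mytable_new AS SELECT * FROM mytable;
    -- Swap the names; note that views keep following the original table
    -- (they reference it by internal identity, not by name)
    ALTER TABLE mytable RENAME TO mytable_old;
    ALTER TABLE mytable_new RENAME TO mytable;
    COMMIT;
    -- Only once nothing depends on it any more:
    -- DROP TABLE mytable_old;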

The new table now runs lightning fast.

Any idea of:

1) What would have been the more correct way to handle this? I'm looking for a better approach, because one of the side effects was that the views pointing to the table followed the "alter table rename to" statement, and unbeknownst to me they kept reading the old table for two days! (See the sketch after this list.)

2) Why would that happen in the first place?
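
(For context on the view side effect in point 1: views reference the table they were built on by internal identity rather than by name, which is why they silently kept tracking the renamed table. Something like this, with the renamed table's name as a placeholder, lists which views still point at it:)

    -- Views that still reference the old, renamed table
    SELECT view_schema, view_name
    FROM information_schema.view_table_usage
    WHERE table_name = 'mytable_old';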

Best Answer

1) I would have tried cluster followed by analyze. My only hesitation is that I am not 100% sure what happened here. Is it possible there was some index corruption as well? Might a reindex have helped? Given that the stats entries were so far off, is it possible something was corrupt elsewhere regarding the relation's OID?
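
A minimal sketch of that on 8.4, assuming the table is mytable with an index mytable_pkey to cluster on:

    -- Rewrite the table in index order (takes an exclusive lock);
    -- this also rebuilds the table's indexes as a side effect
    CLUSTER mytable USING mytable_pkey;
    -- Refresh planner statistics on the rewritten table
    ANALYZE mytable;
    -- If index corruption alone is suspected, without a full rewrite:
    -- REINDEX TABLE mytable;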

2) I have no idea. I have never seen anything like that before. I would need a lot more diagnostic information even to hazard a guess. If it happens again it would be worth knowing the physical size of the table, for example.
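
For example, the on-disk footprint (again using mytable as a placeholder) could be compared against what roughly 20K rows ought to occupy:

    -- Heap size alone, and heap plus indexes and TOAST
    SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap_size,
           pg_size_pretty(pg_total_relation_size('mytable')) AS total_size;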

Edit: Remembering max_fsm_pages: if that was set too low between vacuums it could cause this sort of table bloat, although that parameter only existed up through 8.3 (8.4 moved the free space map to disk and removed the setting).
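
On an 8.3-or-earlier server the limit can be checked directly; raising it means editing postgresql.conf and restarting:

    -- Pre-8.4 only: the parameter no longer exists in 8.4
    SHOW max_fsm_pages;
    -- In postgresql.conf (requires a restart), e.g.:
    -- max_fsm_pages = 2000000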

Edit2: Vacuum full, however, should have been able to clear up bloat caused by a too-small free space map; at least it does in my experience, which again suggests to me that something deeper was wrong.