PostgreSQL – How to stabilize performance on a frequently updated table in PostgreSQL

performance, postgresql

I'm trying to find a way to prevent performance degradation in my PostgreSQL database.

The table receives 100,000–200,000 INSERTs every minute. Every 5 minutes it also receives a similar volume of aggregating UPDATEs and a number of DELETEs; the aggregation and DELETEs are done in a way that keeps the table at around 10 million rows.

Apparently the updates are too frequent for autovacuum to keep up, so I started running VACUUM manually after each DELETE batch. That improved behavior a lot.
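For reference, this is the kind of manual vacuum I mean, plus an alternative of tuning autovacuum per table so it fires more eagerly (the table name `my_table` is a placeholder for the table described above, and the exact thresholds are illustrative, not tested values):

```sql
-- Reclaim dead tuples and refresh planner statistics right after
-- the DELETE batch finishes:
VACUUM (ANALYZE) my_table;

-- Alternatively, make autovacuum itself more aggressive on this one
-- table instead of vacuuming manually: trigger at ~1% dead rows
-- (the default scale factor is 20%) and don't throttle the worker.
ALTER TABLE my_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 0
);
```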

I suspect that periodic reindexing would also help, but I haven't verified it yet. The fact that PostgreSQL has a dedicated REINDEX command suggests it is sometimes necessary.

I have experimented with different fillfactor settings for the indexes, with no visible difference.
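For context, fillfactor is set per index (or per table); a sketch of what I tried, with placeholder object names, might look like this. Note that changing the setting only affects newly built pages, so the index must be rebuilt for it to take full effect:

```sql
-- Leave 30% free space on index pages so new entries from the
-- constant UPDATE traffic can be placed without page splits:
ALTER INDEX my_table_pkey SET (fillfactor = 70);
REINDEX INDEX my_table_pkey;

-- The same idea applies to the heap: free space left in each page
-- lets updates stay on the same page (HOT updates), reducing bloat.
ALTER TABLE my_table SET (fillfactor = 90);
```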

What else should I consider to keep performance stable? Let's say that slow but stable over time is better than sometimes fast and sometimes very slow.

UPDATE: running VACUUM ANALYZE after each DELETE batch plus a REINDEX every 6 hours solved the issue.
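The 6-hourly reindex could be done like this (table name is a placeholder). On PostgreSQL 12 and later, `REINDEX ... CONCURRENTLY` rebuilds the indexes without blocking writes; the plain form takes an exclusive lock on the index being rebuilt:

```sql
-- Every 6 hours, rebuild all indexes on the table to shed bloat.
-- CONCURRENTLY avoids blocking the per-minute INSERT traffic,
-- at the cost of a slower rebuild and extra disk space.
REINDEX TABLE CONCURRENTLY my_table;
```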

Best Answer

Some things depend on your PostgreSQL version. On older versions, pay attention to the free space map (FSM) settings to make sure PostgreSQL can track freed space in your tables for reuse. You might also schedule a periodic CLUSTER if you can afford the downtime. If not, pay attention to the indexes.
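A sketch of the CLUSTER option mentioned above, with placeholder table and index names. CLUSTER rewrites the table in the order of the chosen index, which compacts bloat as a side effect, but it holds an ACCESS EXCLUSIVE lock for the whole rewrite, hence the downtime caveat:

```sql
-- Rewrite the table physically ordered by the given index,
-- reclaiming all dead space; blocks reads and writes while it runs.
CLUSTER my_table USING my_table_created_at_idx;

-- VACUUM FULL also rewrites and compacts the table (without
-- reordering it), with the same exclusive-lock caveat.
VACUUM FULL my_table;
```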