I have a heavily updated / accessed table where I store serialized Java objects. Rows stay in the table for 2-3 hours (and are updated during that period) before being removed. The table is around 300 MB in size. I have noticed it is VACUUMed very, very often and wonder whether changing the fillfactor would help.
Optimal Fillfactor for Caching Table in PostgreSQL
configuration, fill-factor, nosql, postgresql, vacuum
Best Answer
The key words here are:

1. heavily updated
2. accessed

Point 1 is an indication for a lower fillfactor, while point 2 suggests the opposite. It helps performance if multiple versions of a row are stored on the same data page, and Heap-Only Tuple (HOT) updates achieve exactly that. HOT updates need some wiggle room on the data page - like dead tuples or space reserved by a fillfactor < 100. But they can only do their thing if no index involves any of the updated columns, which should be true in your case.

Another important factor is the tuple size in comparison to your page size, which is most commonly 8 kB.
If the tuple size is 4 kB or more, reducing the fillfactor is futile, since there can never be more than one tuple per data page anyway. You might as well leave it at 100 (which is the default). However, some data types are "TOASTed" and stored out of line if they exceed a size limit, so tuples occupying that much space in the main relation fork are rare.

Whatever you do,
VACUUM will be run often, and that's generally a good thing; I wouldn't worry about it. You create lots of dead tuples, and VACUUM identifies dead rows that are no longer visible to any open transaction, as the manual explains.
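Before tuning anything, you can check whether a lower fillfactor can pay off at all: measure the actual tuple footprint and the share of HOT updates so far. A sketch, assuming a table named `cache_tbl` (the name is a placeholder); `pg_column_size()` on the whole row and the counters in `pg_stat_user_tables` are standard PostgreSQL:

```sql
-- Average on-disk size of a whole row, in bytes. If this approaches
-- half the 8 kB page size, lowering the fillfactor gains little.
SELECT avg(pg_column_size(t.*)) AS avg_tuple_bytes
FROM   cache_tbl t;

-- Share of HOT updates: n_tup_hot_upd close to n_tup_upd means free
-- space on the pages is already being reused effectively.
SELECT n_tup_upd, n_tup_hot_upd
FROM   pg_stat_user_tables
WHERE  relname = 'cache_tbl';
```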
You can play with per-table settings for autovacuum to trigger it less (or more) often for this table only, in particular with autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor.
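A sketch of such per-table tuning; the table name `cache_tbl` and the concrete values are assumptions to adapt, not recommendations:

```sql
-- Vacuum this table once ~5 % of it is dead (default is 20 %),
-- on top of a base threshold of 100 changed tuples.
ALTER TABLE cache_tbl SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_vacuum_threshold    = 100
);

-- Reserve room on each page for HOT updates. Only newly written
-- pages honor the new fillfactor; rewrite the table (e.g. with
-- VACUUM FULL) to apply it to existing pages.
ALTER TABLE cache_tbl SET (fillfactor = 50);
```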
Running VACUUM a lot might actually be a good idea, instead of a very low fillfactor; that depends on access patterns. If all tuples live for, say, 3 hours and each is updated several times, I would still lower the fillfactor to something like 50. You'll have to test and find the sweet spot.

Alternatives
All this aside, since your data seem to be volatile to begin with, consider an UNLOGGED table. Such tables are not written to the WAL, which makes them considerably faster, but they are not crash-safe: don't use this if your server might crash and you still need the data afterwards. But if we are talking about session data for a web application, this may be an acceptable price to pay.
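A minimal sketch of such a table; the name and columns are hypothetical placeholders for your serialized objects:

```sql
-- UNLOGGED: no WAL writes, so updates are cheaper, but the table is
-- truncated after a crash and its contents are not replicated.
CREATE UNLOGGED TABLE cache_tbl (
    id      bigint PRIMARY KEY,
    payload bytea NOT NULL,                     -- serialized Java object
    updated timestamptz NOT NULL DEFAULT now()
) WITH (fillfactor = 50);
```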
Or, even more radically: use a key-value store like Redis, if you can do without the features and safety an RDBMS provides altogether.