I am trying to set up table-level autovacuum on a Postgres database. Which metrics do I need to collect and analyze for that particular table?
I have to enable autovacuum on all the tables where it is required (probably those with the most inserts/deletes). How do I proceed with this?
Thanks
PostgreSQL – set up table-level autovacuum on a Postgres database
Tags: autovacuum, postgresql, postgresql-9.6, postgresql-performance
Related Solutions
Table definition
A low-hanging fruit first: the UNIQUE constraint details_id_key is a total waste of resources. It adds nothing useful over the existing PK details_pkey.
Where did you get these noisy DDL statements? All the redundant default clauses cloud the view. After trimming the noise:
CREATE TABLE public.details (
   value           numeric,
   created_at      timestamptz NOT NULL,
   updated_at      timestamptz NOT NULL,
   effective_date  timestamptz,
   submission_date timestamptz,
   id              uuid NOT NULL,
   form_id         uuid,
   order_id        uuid,
   invoice_id      uuid,
   customer_id     uuid,
   seq             int8 NOT NULL DEFAULT nextval('details_seq_seq'::regclass),
   client_day      date,
   CONSTRAINT details_pkey PRIMARY KEY (id),
   CONSTRAINT details_id_key UNIQUE (id)  -- total waste
);
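If you want to act on that, dropping the redundant constraint is a one-liner. A minimal sketch; first verify that nothing (like a foreign key) depends on details_id_key:

-- drop the redundant UNIQUE constraint; the PK already guarantees uniqueness of id
ALTER TABLE public.details DROP CONSTRAINT details_id_key;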
Query
I have a couple of things to say about your query:
- Use json_agg(t) instead of array_to_json(array_agg(t)).
- Why join to orders at all? Can you join invoices to details directly (see the sketch after this list)?
  JOIN invoices j2 ON j2.id = t1.invoice_id
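Putting both suggestions together, a minimal sketch of the outer query; the column list is a hypothetical stand-in, since the full query and the invoices definition are not in the question:

SELECT json_agg(t)  -- instead of array_to_json(array_agg(t))
FROM  (
   SELECT t1.id, t1.value, t1.effective_date   -- hypothetical column list
   FROM   details  t1
   JOIN   invoices j2 ON j2.id = t1.invoice_id -- direct join, skipping orders
   ) t;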
Then I would ask for the definition of count_pages() (in your question) and for other details. But given your statement:
I've already developed a much improved version of this query that takes this specific scenario down from 985ms down to 20.
... I might just be wasting my time. Rather, ask a separate question based on your improved version of the query - if you still need it.
Query plan gone wrong
Most of the total time is spent in one of the nested loop steps, where Postgres bases the plan on seriously underestimated row counts in both query plans:
-> Nested Loop  (cost=5.19..23499.04 rows=33 width=8) (actual time=1.964..929.479 rows=7166 loops=1)
-> Nested Loop  (cost=5.19..23499.35 rows=33 width=8) (actual time=0.275..277.738 rows=8413 loops=1)
That's the join between details and orders. (I am not even convinced you need orders in the query at all.) The low estimates also build up in every step that leads up to this nested loop.
If Postgres knew to expect that many rows, it would probably choose a merge join or a hash join instead.
The number of actual rows has grown in your second query plan, while the estimate has not. This indicates that you are not running ANALYZE often enough, or that you just had big INSERTs adding more data that is not reflected in the table statistics yet. These two predicates in your query are the typical suspects for such a situation:
AND t1.effective_date >= '2016-01-28T14:56:31.000Z'
AND t1.updated_at >= '2016-02-07T21:29:50.000Z'
I would guess you add new rows all the time, and autovacuum isn't keeping up with the latest stats on the newest rows, while your query focuses on exactly those rows. This can escalate over time or after particularly big INSERTs.
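A quick way to test that theory: analyze the table by hand and re-run the query. If the estimates (and the plan) improve, stale statistics were the culprit. A minimal sketch:

ANALYZE public.details;  -- refresh planner statistics for this table only
-- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query
-- and compare estimated vs. actual row counts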
Statistics for your big table
autovacuum is based on a percentage of rows inserted / updated / deleted. Default settings can be a bit of a trap for your special situation: a big table growing at a constant rate. Especially when mostly the latest additions are relevant for your query.
5M rows, 25K new rows per day. After 10 days, autovacuum "reckons":
Only 5 % new rows (250K of 5M), my default autovacuum_analyze_scale_factor is 10 %. No need to run ANALYZE.
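You can watch this mechanism at work. A sketch using the statistics view pg_stat_user_tables (the column n_mod_since_analyze exists since Postgres 9.4):

SELECT relname
     , n_live_tup            -- estimated number of live rows
     , n_mod_since_analyze   -- rows changed since the last ANALYZE
     , last_autoanalyze
     , last_autovacuum
FROM   pg_stat_user_tables
WHERE  relname = 'details';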
autovacuum_analyze_scale_factor (floating point)
Specifies a fraction of the table size to add to autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE. The default is 0.1 (10% of table size). This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters.
Bold emphasis mine.
Read about the other autovacuum settings, too!
I suggest more aggressive settings - at least for your all-important table details. Per-table settings are possible:
It is possible to run ANALYZE on specific tables and even just specific columns of a table, so the flexibility exists to update some statistics more frequently than others if your application requires it.
Use Storage Parameters for your table. Example:
ALTER TABLE public.details SET (autovacuum_analyze_scale_factor = 0.001);
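Several parameters can be set in one statement, and you can verify the per-table overrides afterwards. A sketch; the values are assumptions to be tuned for your workload:

ALTER TABLE public.details SET (
   autovacuum_analyze_scale_factor = 0.001  -- hypothetical value
 , autovacuum_vacuum_scale_factor  = 0.01   -- hypothetical value
);
SELECT reloptions            -- shows the per-table overrides
FROM   pg_class
WHERE  oid = 'public.details'::regclass;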
And run ANALYZE manually after big INSERTs with new data, before following up immediately with a (possibly) expensive query involving those newly added rows. Quoting the manual once more:
We recommend that active production databases be vacuumed frequently (at least nightly), in order to remove dead rows. After adding or deleting a large number of rows, it might be a good idea to issue a VACUUM ANALYZE command for the affected table. This will update the system catalogs with the results of all recent changes, and allow the PostgreSQL query planner to make better choices in planning queries.
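For the table at hand that could look like this (VERBOSE is optional; it just reports what was done):

VACUUM (ANALYZE, VERBOSE) public.details;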
Index
Less importantly, I would also consider a multicolumn index:
CREATE INDEX details_order_id_timestamps ON public.details
(order_id, effective_date DESC NULLS LAST, updated_at DESC NULLS LAST);
I also do not see any index-only scans, which can also point towards VACUUM not running enough (compare the manual), or towards missing covering indexes, or both. It might pay to review which of your many indexes are actually used and which might be missing.
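To review index usage, the statistics collector already has the numbers. A sketch using pg_stat_user_indexes; an index with idx_scan = 0 since the last stats reset is a candidate for closer inspection:

SELECT relname, indexrelname, idx_scan
FROM   pg_stat_user_indexes
WHERE  relname = 'details'
ORDER  BY idx_scan;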
There is no easy way to work around the problem. When you drop columns that other things reference, like views or foreign keys, you must manually update every part of the schema that depends on that column.
There is no shortcut that I know of. Nobody really makes a habit of reordering tables. Most requests for it are cosmetic; however, if you're really doing it for a valid reason (like table packing), then I wish you the best of luck.
It's been a feature request for a while, and it comes up frequently.
Best Answer
Just leave all the default settings as they are. Autovacuum is enabled by default and will usually do the right thing. In my professional life I find that newbies who are trying to tune the autovacuum parameters usually do more harm than good.
One parameter I'd recommend changing is autovacuum_vacuum_cost_delay, which I would lower to 2ms (the default setting from v12 on). That way, you run less risk of an autovacuum that is too lazy.
If you ask a more specific question, you can get a more specific answer.
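A sketch of how to change that cluster-wide; ALTER SYSTEM writes the setting to postgresql.auto.conf, and a reload is enough (no restart needed):

ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
SELECT pg_reload_conf();  -- apply the change without a restart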