PostgreSQL – Does logging a SLOW query impact Postgres performance?

postgresql

Context…

Our production system occasionally reports query timings that are longer than usual.

I have already run EXPLAIN ANALYZE on the SQL statement, but nothing in the plan explains the longer-than-usual query timing.

cprdb=# explain ANALYZE select ror from crns where crn='8000440008' limit 1;
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.29..8.30 rows=1 width=6) (actual time=0.092..0.092 rows=1 loops=1)
   ->  Index Scan using crn_idx on crns  (cost=0.29..8.30 rows=1 width=6) (actual time=0.091..0.091 rows=1 loops=1)
         Index Cond: (crn = '8000440008'::bpchar)
 Planning time: 0.200 ms
 Execution time: 0.132 ms

My next bet is to log PostgreSQL (DML) query times using log_min_duration_statement=0 and get conclusive evidence of where the problem lies.

For that, I want to cross-check one thing:

By enabling query logging, I would not degrade overall PostgreSQL performance, right?
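
For reference, this is roughly how I plan to turn it on (a sketch; I'm assuming PostgreSQL 9.4+ so ALTER SYSTEM is available, otherwise the same setting can go into postgresql.conf followed by a reload):

-- log the duration of every completed statement
ALTER SYSTEM SET log_min_duration_statement = 0;
-- pick up the change without a restart
SELECT pg_reload_conf();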

Best Answer

How is the production system reporting the query times now? If you are already getting that data, it is questionable whether collecting it a second way will be helpful.

But to answer your question, there are two kinds of overhead you will get.

Setting log_min_duration_statement to any non-negative value will cause the system clock to be queried more often. This should be almost negligible on common modern hardware and kernels, but can be quite slow on older systems.
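
If you want to gauge how expensive clock reads are on your particular hardware, PostgreSQL ships with a small utility for exactly that; run it on the database server (where the binary lives depends on your packaging):

# sample the system clock in a tight loop for 5 seconds and report the per-read overhead
pg_test_timing -d 5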

The other overhead is actually writing the log entries. If the server is very busy, this will probably be quite noticeable with log_min_duration_statement=0. You can probably come up with a threshold greater than 0 that still catches the slow statements without logging every single statement.
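
For example, if "unusually slow" for this workload means anything over a couple of hundred milliseconds, something along these lines would log only those statements (the 250 ms value is just an illustrative assumption; pick a figure that separates normal from slow in your own workload):

-- log only statements that take longer than 250 ms to complete
ALTER SYSTEM SET log_min_duration_statement = '250ms';
SELECT pg_reload_conf();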