PostgreSQL – First run of the planner very slow on a massively partitioned PostgreSQL 9.6 table

execution-plan, partitioning, performance, postgresql-9.6, query-performance

I have a rather massive table of financial data in PostgreSQL 9.6. It is partitioned on two levels: first by trading day (DATE type) and then by security ID (BIGINT). Overall the table comprises 20 per-day partitions, each further subdivided into 15 per-instrument partitions. I use the conventional inheritance-based method of partitioning with CHECK constraints. The partitions are unlogged tables which I load in bulk via COPY.
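For reference, the two-level scheme described above looks roughly like this (a minimal sketch; column names other than `trading_day` and `inst_id` are assumed, and only one leaf partition is shown):

```sql
-- Parent table (columns beyond those in the query filter are illustrative)
CREATE TABLE public.ticks (
    trading_day date    NOT NULL,
    inst_id     bigint  NOT NULL,
    is_trade    boolean NOT NULL,
    qty         integer NOT NULL
);

-- Intermediate per-day partition
CREATE TABLE public.ticks_20160620 (
    CHECK (trading_day = DATE '2016-06-20')
) INHERITS (public.ticks);

-- Leaf per-instrument partition, unlogged for fast bulk COPY
CREATE UNLOGGED TABLE public.ticks_20160620_1535681 (
    CHECK (trading_day = DATE '2016-06-20'),
    CHECK (inst_id = 1535681)
) INHERITS (public.ticks_20160620);
```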

When performing EXPLAIN ANALYZE on trivial queries that can be satisfied from a single partition, I see that the first run of the planner can take up to 30 seconds in some cases. I suppose this has to do with some table attributes the planner uses taking time to load into memory. What is causing this behavior, and how can I "warm up" the database to avoid this lengthy planning, which surprises my users?

Example EXPLAIN ANALYZE output:

trading_offline=# EXPLAIN ANALYZE
select * from public."ticks"
where trading_day = '2016-06-20'::date and inst_id = 1535681 and is_trade = true
and qty < -200;

First time – planning took 2.4 seconds:

                                        QUERY PLAN                                                       
-----------------------------------------------------------------------------------------------------------------------
Append  (cost=0.00..0.00 rows=1 width=161) (actual time=0.016..0.016 rows=0 loops=1)
->  Seq Scan on ticks  (cost=0.00..0.00 rows=1 width=161) (actual time=0.014..0.014 rows=0 loops=1)
        Filter: (is_trade AND (qty < '-200'::integer) AND (trading_day = '2016-06-20'::date) AND (inst_id = 1535681))
Planning time: 2415.293 ms
Execution time: 0.051 ms
(5 rows)

xeur_trading_offline=# explain analyze select * from xeur."ticks"
where trading_day = '2016-06-20'::date and inst_id = 1535681 and is_trade = true
and qty < -200;

Subsequent times – planning is down to ~ 40 ms:

                                   QUERY PLAN                                                       
-----------------------------------------------------------------------------------------------------------------------
Append  (cost=0.00..0.00 rows=1 width=161) (actual time=0.017..0.017 rows=0 loops=1)
->  Seq Scan on ticks  (cost=0.00..0.00 rows=1 width=161) (actual time=0.017..0.017 rows=0 loops=1)
        Filter: (is_trade AND (qty < '-200'::integer) AND (trading_day = '2016-06-20'::date) AND (inst_id = 1535681))
Planning time: 42.067 ms
Execution time: 0.062 ms
(5 rows)

                       QUERY PLAN                                                       
-----------------------------------------------------------------------------------------------------------------------
Append  (cost=0.00..0.00 rows=1 width=161) (actual time=0.024..0.024 rows=0 loops=1)
->  Seq Scan on ticks  (cost=0.00..0.00 rows=1 width=161) (actual time=0.024..0.024 rows=0 loops=1)
        Filter: (is_trade AND (qty < '-200'::integer) AND (trading_day = '2016-06-20'::date) AND (inst_id = 1535681))
Planning time: 42.082 ms
Execution time: 0.070 ms
(5 rows)

Best Answer

PostgreSQL holds a lot of catalog metadata in a per-session system cache. When a query touches many partitions (more than about 100), the first planning in a session requires a large number of read operations to populate that cache, and this can be very slow.
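The amount of catalog data the planner must examine can be gauged by counting the leaf partitions two inheritance levels below the parent (a sketch against the system catalogs; the parent table name is taken from the question, and with 20 days × 15 instruments this should report around 300):

```sql
-- Count leaf partitions: children of children of public.ticks
SELECT count(*) AS leaf_partitions
FROM pg_inherits AS day_level
JOIN pg_inherits AS inst_level
  ON inst_level.inhparent = day_level.inhrelid
WHERE day_level.inhparent = 'public.ticks'::regclass;
```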

As a rule of thumb, the number of partitions should not exceed about 100. If you have more, you can see artefacts like this.

In your case, connection pooling can be a solution, since pooled sessions keep their caches warm across clients. Try installing pgbouncer.
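With pgbouncer in session-pooling mode, server backends (and their warmed catalog caches) are reused across client connections, so the expensive first plan is paid once per pooled backend rather than once per client connection. A minimal sketch of `pgbouncer.ini` (the database name, host, and file paths are assumptions):

```ini
[databases]
trading_offline = host=127.0.0.1 port=5432 dbname=trading_offline

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session        ; reuse backends, keeping their catalog caches warm
max_client_conn = 100
default_pool_size = 20
```

Clients then connect to port 6432 instead of 5432; once each pooled backend has planned the query once, subsequent clients see only the ~40 ms planning time.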