PostgreSQL – Optimize a query on two big tables

index-tuning | optimization | postgresql | query-performance

I have a very important query in my system that is taking too long to execute due to the huge amount of data in the tables. I'm a junior DBA and I need the best optimization I can get for this. The tables have approximately 80 million rows each.

Tables are:

tb_pd:

   Column            |  Type   | Modifiers | Storage | Stats target | Description 
---------------------+---------+-----------+---------+--------------+-------------
 pd_id               | integer | not null  | plain   |              | 
 st_id               | integer |           | plain   |              | 
 status_id           | integer |           | plain   |              | 
 next_execution_date | bigint  |           | plain   |              | 
 priority            | integer |           | plain   |              | 
 is_active           | integer |           | plain   |              | 
Indexes:
    "pk_pd" PRIMARY KEY, btree (pd_id)
    "idx_pd_order" btree (priority, next_execution_date)
    "idx_pd_where" btree (status_id, next_execution_date, is_active)
Foreign-key constraints:
    "fk_st" FOREIGN KEY (st_id) REFERENCES tb_st(st_id)

tb_st:

 Column |          Type          | Modifiers | Storage  | Stats target | Description 
--------+------------------------+-----------+----------+--------------+-------------
 st_id  | integer                | not null  | plain    |              | 
 st     | character varying(500) |           | extended |              | 
Indexes:
    "pk_st" PRIMARY KEY, btree (st_id)
Referenced by:
    TABLE "tb_pd" CONSTRAINT "fk_st" FOREIGN KEY (st_id) REFERENCES tb_st(st_id)

My query is:

select s.st                                               
from tb_pd p inner join
     tb_st s on p.st_id = s.st_id
where p.status_id = 1 and
      p.next_execution_date < 1401402110830 and
      p.is_active = 1
order by priority, next_execution_date
limit 20000;

With the indexes I have, the best I got was:

Limit  (cost=1.14..263388.65 rows=20000 width=45)
   ->  Nested Loop  (cost=1.14..456016201.43 rows=34627017 width=45)
         ->  Index Scan using idx_pd_order on tb_pd p  (cost=0.57..161388942.77 rows=34627017 width=16)
               Index Cond: (next_execution_date < 1401402110830::bigint)
               Filter: ((status_id = 1) AND (is_active = 1))
         ->  Index Scan using pk_st on tb_st s  (cost=0.57..8.50 rows=1 width=37)
               Index Cond: (st_id = p.st_id)

I cannot understand the EXPLAIN output very well, but it's not using idx_pd_where to filter the WHERE clause, even though idx_pd_where has all the columns used in the WHERE clause.

More info about the data:

status_id is 1 in ~95% of rows.
is_active is 1 in ~90% of rows.
next_execution_date is in milliseconds and varies a lot. The value compared against is the moment of execution (current time in millis).
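
A distribution like this can be confirmed with a simple aggregate, for example (a one-off full scan, so it takes a while on 80M rows):

SELECT status_id, is_active, count(*) AS row_count
     , round(count(*) * 100.0 / sum(count(*)) OVER (), 1) AS pct
FROM   tb_pd
GROUP  BY status_id, is_active
ORDER  BY row_count DESC;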

Should I create separate indexes for each filtered column, or use a different kind of index? Maybe change some configuration on the DBMS?

Best Answer

This is a tricky one. Your main condition is on next_execution_date, but the output is sorted by priority first. The conditions on status_id and is_active only play a minor part.

Better index

Your index idx_pd_order is not a big help, since filtering on non-leading columns of a multi-column index is not very efficient. Postgres is using it - still a lot better than a sequential scan. Details here:
Is a composite index also good for queries on the first field?

idx_pd_where might be a better choice, but not a good one, either. The leading column status_id is not selective at all and just bloats the index. Same goes for the trailing column is_active. And priority is not in the index and has to be fetched from the table, making an index-only scan impossible.

I suggest this partial, multi-column index to start with. (But keep reading!)

CREATE INDEX idx_pd_covering ON tb_pd (next_execution_date, priority, st_id)
WHERE  status_id = 1 AND is_active = 1;
  • Since we are only ever interested in rows with status_id = 1 and is_active = 1, exclude other rows from the index right away. Size does matter.

  • The remaining (crucial) condition is on next_execution_date, which must come first in the index.

  • priority and st_id are only appended for a possible index-only scan (Postgres 9.2+). If that doesn't take, remove the columns from the index to make it smaller. A way to verify this is sketched right below.
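
Whether you actually get index-only scans is easy to check with EXPLAIN. The plan node should read Index Only Scan using idx_pd_covering, and with ANALYZE the Heap Fetches count should be low. (That part also depends on an up-to-date visibility map, so VACUUM matters on a heavily written table.)

EXPLAIN (ANALYZE, BUFFERS)
SELECT st_id, priority
FROM   tb_pd
WHERE  status_id = 1
AND    is_active = 1
AND    next_execution_date < 1401402110830
ORDER  BY next_execution_date
LIMIT  20000;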

Special difficulty

We can now use idx_pd_covering to find qualifying rows quickly. Unfortunately, we have to look at all qualifying rows to collect the ones with the highest priority. As the query plan reveals, Postgres estimates it has to process 34,627,017 rows. Sorting ~35M rows is going to cost big. That's the tricky part I mentioned at the start. To demonstrate what I am talking about, run EXPLAIN on your query with and without priority in the ORDER BY:

SELECT s.st                                               
FROM   tb_pd p
JOIN   tb_st s USING (st_id)
WHERE  p.status_id = 1
AND    p.is_active = 1
AND    p.next_execution_date < 1401402110830
ORDER  BY priority, next_execution_date
LIMIT  20000;

That's your query, only reformatted and slightly simplified. You should see a huge difference between the two plans.
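
For the comparison, the variant without priority. With idx_pd_covering in place, its sort order matches the index, so Postgres can read pre-sorted rows straight off the index and stop after 20000, while the original query has to fetch and sort all qualifying rows first:

EXPLAIN
SELECT s.st
FROM   tb_pd p
JOIN   tb_st s USING (st_id)
WHERE  p.status_id = 1
AND    p.is_active = 1
AND    p.next_execution_date < 1401402110830
ORDER  BY next_execution_date      -- priority removed
LIMIT  20000;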

Solution

The solution depends on the number of distinct values for priority. For lack of information, and for demo purposes, I am going to assume only three: priority 1, 2 and 3.

With a trivial number of distinct priority values, there is a simple solution: create three partial indexes. Together they are still smaller than your current indexes idx_pd_order or idx_pd_where (which you may not need any more).

CREATE INDEX idx_pd_covering_p1 ON tb_pd (next_execution_date, st_id)
WHERE  priority = 1 AND status_id = 1 AND is_active = 1;

CREATE INDEX idx_pd_covering_p2 ON tb_pd (next_execution_date, st_id)
WHERE  priority = 2 AND status_id = 1 AND is_active = 1;

CREATE INDEX idx_pd_covering_p3 ON tb_pd (next_execution_date, st_id)
WHERE  priority = 3 AND status_id = 1 AND is_active = 1;
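
Whether the three together really stay smaller than the old indexes is easy to check with the standard size functions, for example:

SELECT c.relname, pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM   pg_class c
WHERE  c.relname LIKE 'idx_pd%'
ORDER  BY c.relname;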

Use this query:

SELECT s.st
FROM  (
   (
   SELECT st_id
   FROM   tb_pd
   WHERE  status_id = 1
   AND    is_active = 1
   AND    priority  = 1
   AND    next_execution_date < 1401402110830
   ORDER  BY next_execution_date
   )
   UNION ALL
   (
   SELECT st_id
   FROM   tb_pd
   WHERE  status_id = 1
   AND    is_active = 1
   AND    priority  = 2
   AND    next_execution_date < 1401402110830
   ORDER  BY next_execution_date
   )
   UNION ALL
   (
   SELECT st_id
   FROM   tb_pd
   WHERE  status_id = 1
   AND    is_active = 1
   AND    priority  = 3
   AND    next_execution_date < 1401402110830
   ORDER  BY next_execution_date
   )
   LIMIT  20000
   ) p
JOIN   tb_st s USING (st_id);

This should be dynamite.

  • Strictly speaking, the final order is not guaranteed without an additional ORDER BY clause in the outer query. In the current implementation, the order from the inner query is preserved as long as the outer query is as simple as that. To be sure, you could join right away (which may be a bit slower):

(
SELECT s.st
FROM   tb_pd p
JOIN   tb_st s USING (st_id)
WHERE  p.status_id = 1
AND    p.is_active = 1
AND    p.priority  = 1
AND    p.next_execution_date < 1401402110830
ORDER  BY p.next_execution_date
)
UNION ALL
(
...
)
LIMIT  20000;

.. or carry along priority and next_execution_date to order once more in the outer query (to be absolutely sure), which is probably slower still. A sketch:
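
That variant might look like this, with two legs shown. The explicit outer ORDER BY makes the order bulletproof, and since each leg is already sorted, Postgres may be able to merge the legs rather than sort from scratch:

SELECT s.st
FROM  (
   (
   SELECT st_id, priority, next_execution_date
   FROM   tb_pd
   WHERE  status_id = 1
   AND    is_active = 1
   AND    priority  = 1
   AND    next_execution_date < 1401402110830
   ORDER  BY next_execution_date
   )
   UNION ALL
   (
   SELECT st_id, priority, next_execution_date
   FROM   tb_pd
   WHERE  status_id = 1
   AND    is_active = 1
   AND    priority  = 2
   AND    next_execution_date < 1401402110830
   ORDER  BY next_execution_date
   )
   ORDER  BY priority, next_execution_date   -- explicit, order guaranteed
   LIMIT  20000
   ) p
JOIN   tb_st s USING (st_id);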

  • All parentheses are needed! Related answer.

  • This query just reads tuples off the top of the above partial indexes; no sort step is needed at all. All rows are pre-sorted: efficient to boot.

  • UNION ALL queries without a final ORDER BY can stop as soon as the number of rows requested in the top-level LIMIT has been fetched. So if there are enough rows in the top priorities, subsequent legs of the UNION ALL query are never executed. This way, only the smaller partial indexes have to be touched.

  • Joining to tb_st only at the end should be more efficient.

  • Again, the column st_id is only appended to the index in the hope of getting index-only scans. If that works for you, the whole query does not even touch the table tb_pd at all.

General solution for any number of distinct priority values

We have solved this before. There is a complete recipe to automate the creation of partial indexes, plus a matching function .. the works:
Can spatial index help a "range - order by - limit" query
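
For illustration, a minimal sketch of such a function, assuming one partial index per priority as above. It is not the recipe from the linked answer; the name f_next_executions is made up. The function loops over priorities in ascending order and stops as soon as enough rows are collected. (A small lookup table of valid priorities would beat the DISTINCT scan shown here.)

CREATE OR REPLACE FUNCTION f_next_executions(_before bigint, _limit int)
  RETURNS SETOF tb_pd
  LANGUAGE plpgsql AS
$func$
DECLARE
   _prio    int;
   _fetched int;
BEGIN
   FOR _prio IN
      SELECT DISTINCT priority FROM tb_pd ORDER BY priority
   LOOP
      RETURN QUERY
      SELECT *
      FROM   tb_pd
      WHERE  status_id = 1
      AND    is_active = 1
      AND    priority  = _prio
      AND    next_execution_date < _before
      ORDER  BY next_execution_date
      LIMIT  _limit;                     -- never fetch more than still needed

      GET DIAGNOSTICS _fetched = ROW_COUNT;
      _limit := _limit - _fetched;
      EXIT WHEN _limit <= 0;             -- enough rows: skip remaining priorities
   END LOOP;
END
$func$;

Usage would then be along the lines of: SELECT s.st FROM f_next_executions(1401402110830, 20000) p JOIN tb_st s USING (st_id);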

Optimize table

Since you are trying to optimize performance and your table is big, I suggest a slightly altered layout for your table tb_pd:

   Column            |  Type
---------------------+--------
 pd_id               | integer
 st_id               | integer
 next_execution_date | bigint
 priority            | integer  -- or smallint? -- or "char"?
 status_id           | smallint -- or "char"
 is_active           | boolean

The reordering places the bigint on an 8-byte boundary and moves the narrow columns to the end, which eliminates alignment padding; smallint and boolean save a few more bytes. The altered layout occupies 52 bytes per row on disk, while your current design needs 60. Indexes profit as well. Details:
Configuring PostgreSQL for read performance
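
The effect of column order and types can be checked with pg_column_size() on ad-hoc row values. The reported numbers come out a few bytes below the on-disk figures, since every heap row additionally needs a 4-byte item pointer and is padded to a multiple of 8 bytes:

SELECT pg_column_size(ROW(1::int, 1::int, 1::int
                        , 1401402110830::bigint, 1::int, 1::int)) AS old_layout
     , pg_column_size(ROW(1::int, 1::int, 1401402110830::bigint
                        , 1::int, 1::smallint, true))             AS new_layout;

-- old_layout comes out larger: the bigint in the middle forces alignment
-- padding, and the trailing int columns cost more than smallint + boolean.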

Of course, all the basic advice for performance optimization applies as well.

About "char":

The type "char" (note the quotes) is different from char(1) in that it only uses one byte of storage. It is internally used in the system catalogs as a simplistic enumeration type.