I have table A with 15M records and table B with 5k records. I need to perform an inner join on both, but the query takes around 50 seconds, which is far too long.
EXPLAIN ANALYZE
SELECT DISTINCT (a.student_id), b.student_name, a.class_year
FROM table_a a
INNER JOIN table_b b ON a.student_id = b.student_id;
Explain Plan
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
 Unique  (cost=3628096.85..3779293.37 rows=11780855 width=50) (actual time=35421.004..50690.702 rows=5078 loops=1)
   ->  Sort  (cost=3628096.85..3665895.98 rows=15119652 width=50) (actual time=35421.002..46385.451 rows=14264755 loops=1)
         Sort Key: a.student_id, b.student_name, a.class_year
         Sort Method: external merge  Disk: 890528kB
         ->  Hash Join  (cost=242.20..1308298.78 rows=15119652 width=50) (actual time=3.877..22332.795 rows=14264755 loops=1)
               Hash Cond: ((a.student_id)::text = (b.student_id)::text)
               ->  Seq Scan on table_a a  (cost=0.00..1268336.52 rows=15119652 width=25) (actual time=0.035..6168.042 rows=15119652 loops=1)
               ->  Hash  (cost=174.31..174.31 rows=5431 width=45) (actual time=3.822..3.822 rows=5431 loops=1)
                     Buckets: 8192  Batches: 1  Memory Usage: 483kB
                     ->  Seq Scan on table_b b  (cost=0.00..174.31 rows=5431 width=45) (actual time=0.008..1.886 rows=5431 loops=1)
Planning time: 2.386 ms
Execution time: 50822.593 ms
(12 rows)
I have an index on table_a for student_id:
"student_id" btree (student_id)
Best Answer
You could first reduce table_a to the rows you are interested in, and only then do the join. Assuming you want the row with the highest class_year per student, you can try:
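A sketch of that approach, using Postgres's `DISTINCT ON` and assuming "highest" means the largest class_year value:

```sql
SELECT sub.student_id, b.student_name, sub.class_year
FROM (
    -- collapse table_a to one row per student *before* joining,
    -- keeping the row with the highest class_year
    SELECT DISTINCT ON (student_id) student_id, class_year
    FROM table_a
    ORDER BY student_id, class_year DESC
) AS sub
JOIN table_b b ON sub.student_id = b.student_id;
```

This avoids the expensive step in your plan, the external-merge sort of all 14M joined rows: the subquery shrinks table_a down to one row per student first, so the join and any later sort only touch a few thousand rows.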
Or you could try a GROUP BY instead, which can be executed as a parallel aggregation since PostgreSQL 9.6:
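For example (equivalent to the above under the same "highest class_year per student" assumption; if you instead want every distinct (student_id, class_year) pair, add class_year to the GROUP BY and drop the max()):

```sql
SELECT a.student_id, b.student_name, max(a.class_year) AS class_year
FROM table_a a
JOIN table_b b ON a.student_id = b.student_id
GROUP BY a.student_id, b.student_name;
```

Here the planner can use a (possibly parallel) hash aggregate instead of sorting the full join result, which sidesteps the 890MB on-disk sort entirely.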
An index on table_a (student_id, class_year) should help with that.
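For reference, such an index could be created like this (the index name is just illustrative; Postgres will generate one if you omit it):

```sql
CREATE INDEX table_a_student_id_class_year_idx
    ON table_a (student_id, class_year);
```

With that index in place, the DISTINCT ON subquery can read (student_id, class_year) pairs in index order and skip the sort, and the GROUP BY variant can feed the aggregate from an index-only scan instead of a full sequential scan of the 15M-row table.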