PostgreSQL – Why is PostgreSQL choosing the more expensive join order?

execution-plan, join, postgresql, postgresql-10

PostgreSQL is using the default configuration, plus:

default_statistics_target=1000
random_page_cost=1.5
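
(These can be set, for example, with ALTER SYSTEM; just a sketch, and whether they were applied this way or directly in postgresql.conf isn't shown here. Both settings only need a configuration reload, not a restart.)

ALTER SYSTEM SET default_statistics_target = 1000;
ALTER SYSTEM SET random_page_cost = 1.5;
SELECT pg_reload_conf();   -- picks up both settings without a server restart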

Version

PostgreSQL 10.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 6.4.0) 6.4.0, 64-bit

I've vacuumed and analyzed. The query is very straightforward:

SELECT r.price
FROM account_payer ap
  JOIN account_contract ac ON ap.id = ac.account_payer_id
  JOIN account_schedule "as" ON ac.id = "as".account_contract_id
  JOIN schedule s ON "as".id = s.account_schedule_id
  JOIN rate r ON s.id = r.schedule_id
WHERE ap.account_id = 8

Every id column is the primary key, each join is on a foreign key relationship, and every foreign key has an index. There is also an index on account_payer.account_id.
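
Roughly, the schema looks like this (a simplified sketch: the column types and numeric precision are assumptions, and the extra rate columns are only inferred from the compound index name in the plans below; only columns relevant to the query are shown):

CREATE TABLE account_payer (
    id         bigint PRIMARY KEY,
    account_id bigint NOT NULL
);
CREATE INDEX ON account_payer (account_id);

CREATE TABLE account_contract (
    id               bigint PRIMARY KEY,
    account_payer_id bigint NOT NULL REFERENCES account_payer (id)
);
CREATE INDEX ON account_contract (account_payer_id);

CREATE TABLE account_schedule (
    id                  bigint PRIMARY KEY,
    account_contract_id bigint NOT NULL REFERENCES account_contract (id)
);
CREATE INDEX ON account_schedule (account_contract_id);

CREATE TABLE schedule (
    id                  bigint PRIMARY KEY,
    account_schedule_id bigint NOT NULL REFERENCES account_schedule (id)
);
CREATE INDEX ON schedule (account_schedule_id);

CREATE TABLE rate (
    id          bigint PRIMARY KEY,
    schedule_id bigint NOT NULL REFERENCES schedule (id),
    code        text,            -- assumed; inferred from the compound index name
    modifier    text,            -- assumed
    facility    text,            -- assumed
    price       numeric(10,2)
);
CREATE INDEX rate_schedule_id_code_modifier_facility_idx
    ON rate (schedule_id, code, modifier, facility);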

It takes 3.93s to return 76k rows.

Merge Join  (cost=8.06..83114.08 rows=3458267 width=6) (actual time=0.228..3920.472 rows=75548 loops=1)
  Merge Cond: (s.account_schedule_id = "as".id)
  ->  Nested Loop  (cost=0.57..280520.54 rows=6602146 width=14) (actual time=0.163..3756.082 rows=448173 loops=1)
        ->  Index Scan using schedule_account_schedule_id_idx on schedule s  (cost=0.14..10.67 rows=441 width=16) (actual time=0.035..0.211 rows=89 loops=1)
        ->  Index Scan using rate_schedule_id_code_modifier_facility_idx on rate r  (cost=0.43..486.03 rows=15005 width=10) (actual time=0.025..39.903 rows=5036 loops=89)
              Index Cond: (schedule_id = s.id)
  ->  Materialize  (cost=0.43..49.46 rows=55 width=8) (actual time=0.060..12.984 rows=74697 loops=1)
        ->  Nested Loop  (cost=0.43..49.32 rows=55 width=8) (actual time=0.048..1.110 rows=66 loops=1)
              ->  Nested Loop  (cost=0.29..27.46 rows=105 width=16) (actual time=0.030..0.616 rows=105 loops=1)
                    ->  Index Scan using account_schedule_pkey on account_schedule "as"  (cost=0.14..6.22 rows=105 width=16) (actual time=0.014..0.098 rows=105 loops=1)
                    ->  Index Scan using account_contract_pkey on account_contract ac  (cost=0.14..0.20 rows=1 width=16) (actual time=0.003..0.003 rows=1 loops=105)
                          Index Cond: (id = "as".account_contract_id)
              ->  Index Scan using account_payer_pkey on account_payer ap  (cost=0.14..0.21 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=105)
                    Index Cond: (id = ac.account_payer_id)
                    Filter: (account_id = 8)
                    Rows Removed by Filter: 0
Planning time: 5.843 ms
Execution time: 3929.317 ms

If I set join_collapse_limit=1, it takes 0.16s, a 25x speedup.
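
For completeness, a sketch of the session-level change that produces the plan below (the exact invocation isn't shown above):

SET join_collapse_limit = 1;   -- the planner keeps the join order as written in the FROM clause

EXPLAIN ANALYZE
SELECT r.price
FROM account_payer ap
  JOIN account_contract ac ON ap.id = ac.account_payer_id
  JOIN account_schedule "as" ON ac.id = "as".account_contract_id
  JOIN schedule s ON "as".id = s.account_schedule_id
  JOIN rate r ON s.id = r.schedule_id
WHERE ap.account_id = 8;

RESET join_collapse_limit;     -- restore the default of 8 for the rest of the session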

Nested Loop  (cost=6.32..147323.97 rows=3458267 width=6) (actual time=8.908..151.860 rows=75548 loops=1)
  ->  Nested Loop  (cost=5.89..390.23 rows=231 width=8) (actual time=8.730..11.655 rows=66 loops=1)
        Join Filter: ("as".id = s.account_schedule_id)
        Rows Removed by Join Filter: 29040
        ->  Index Scan using schedule_pkey on schedule s  (cost=0.27..17.65 rows=441 width=16) (actual time=0.014..0.314 rows=441 loops=1)
        ->  Materialize  (cost=5.62..8.88 rows=55 width=8) (actual time=0.001..0.011 rows=66 loops=441)
              ->  Hash Join  (cost=5.62..8.61 rows=55 width=8) (actual time=0.240..0.309 rows=66 loops=1)
                    Hash Cond: ("as".account_contract_id = ac.id)
                    ->  Seq Scan on account_schedule "as"  (cost=0.00..2.05 rows=105 width=16) (actual time=0.010..0.028 rows=105 loops=1)
                    ->  Hash  (cost=5.02..5.02 rows=48 width=8) (actual time=0.178..0.178 rows=61 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 11kB
                          ->  Hash Join  (cost=1.98..5.02 rows=48 width=8) (actual time=0.082..0.143 rows=61 loops=1)
                                Hash Cond: (ac.account_payer_id = ap.id)
                                ->  Seq Scan on account_contract ac  (cost=0.00..1.91 rows=91 width=16) (actual time=0.007..0.023 rows=91 loops=1)
                                ->  Hash  (cost=1.64..1.64 rows=27 width=8) (actual time=0.048..0.048 rows=27 loops=1)
                                      Buckets: 1024  Batches: 1  Memory Usage: 10kB
                                      ->  Seq Scan on account_payer ap  (cost=0.00..1.64 rows=27 width=8) (actual time=0.009..0.023 rows=27 loops=1)
                                            Filter: (account_id = 8)
                                            Rows Removed by Filter: 24
  ->  Index Scan using rate_schedule_id_code_modifier_facility_idx on rate r  (cost=0.43..486.03 rows=15005 width=10) (actual time=0.018..1.685 rows=1145 loops=66)
        Index Cond: (schedule_id = s.id)
Planning time: 4.692 ms
Execution time: 160.585 ms

These outputs make little sense to me. The first plan has a (very high) cost of 280,500 for the nested loop joining the schedule and rate indexes. Why is PostgreSQL intentionally choosing that very expensive join first?

Additional information requested via comments

Is rate_schedule_id_code_modifier_facility_idx a compound index?

It is, with schedule_id as the first column. I've also tried a dedicated index on schedule_id alone; the planner chooses it, but it doesn't improve performance or otherwise change the plan.
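
(For reference, the dedicated index mentioned here would be something like the following; the name is hypothetical.)

CREATE INDEX rate_schedule_id_idx ON rate (schedule_id);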

Best Answer

It seems that either your statistics are not accurate (run VACUUM ANALYZE to refresh them), or you have correlated columns in your model (in which case you'll need to run CREATE STATISTICS to inform the planner of that fact).

The join_collapse_limit parameter allows the planner to reorder joins so that it performs first the ones that fetch the least data. But, for performance reasons, the planner can't be allowed to do that on a query with many joins; by default it is capped at 8 joins. By setting it to 1, you simply disable that ability, and the joins are executed in the order they are written in the query.
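
A quick way to see the current and built-in default values (a sketch; from_collapse_limit is the closely related parameter for subqueries flattened into the FROM list):

SELECT name, setting, boot_val
FROM pg_settings
WHERE name IN ('join_collapse_limit', 'from_collapse_limit');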

So how does Postgres foresee how many rows this query should fetch? It uses statistics to estimate the number of rows.
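
If you want to see what the planner is working from, the per-column statistics and the table-level row count estimates are visible in the catalogs. A sketch, using tables and columns from the query above:

-- per-column statistics gathered by ANALYZE
SELECT tablename, attname, n_distinct, null_frac, correlation
FROM pg_stats
WHERE tablename IN ('rate', 'schedule')
  AND attname IN ('schedule_id', 'account_schedule_id');

-- table-level estimates (reltuples is an estimate, not an exact count)
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname IN ('rate', 'schedule', 'account_schedule');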

What we can see in your explain plans is that there are several inaccurate row count estimates (the first value is the estimate, the second is the actual).

For example, here:

Materialize  (cost=0.43..49.46 rows=55 width=8) (actual time=0.060..12.984 rows=74697 loops=1)

The planner estimated 55 rows when it actually got 74,697.

What I'd do (if I were in your shoes) is:

  • Analyze the five tables involved to refresh their statistics
  • Replay EXPLAIN ANALYZE
  • Look at the difference between estimated row counts and actual row counts
  • If the estimated row counts are now right, the plan may have changed and become more efficient. If everything is OK, you might consider changing your autovacuum settings so that ANALYZE (and VACUUM) run more often
  • If the estimated row counts are still wrong, it looks like you have correlated data in your tables (a third normal form violation). You might consider declaring it with CREATE STATISTICS (documentation here); see the sketch after this list
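
As a sketch of that last step, assuming (hypothetically) that schedule_id and code on rate are the correlated pair; CREATE STATISTICS is available from PostgreSQL 10, which matches your version:

CREATE STATISTICS rate_sched_code_stats (dependencies, ndistinct)
    ON schedule_id, code FROM rate;   -- hypothetical column pair; use the ones you know are correlated

ANALYZE rate;   -- extended statistics are only populated by a subsequent ANALYZE

After that, replay EXPLAIN ANALYZE and compare the estimated and actual row counts again.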

If you need more information about row estimates and how they are calculated, you'll find everything you need in Tomas Vondra's conference talk "Create statistics - What is it for?" (slides here).