PostgreSQL – Iterate over the smaller table instead of the bigger one

index; join; postgresql

So, I have 2 tables:

events
(
    event_id primary key,
    date,
    -- other data
)

actions
(
   action_id primary key,
   event_id references events(event_id),
   -- other data
)

(the relation is one event to many actions)

The first table is much bigger than the second one on every specific day (that is, most events don't have any actions corresponding to them).

I want to run queries like

SELECT actions.* FROM actions JOIN events ON actions.event_id = events.event_id WHERE events.date = $some_fixed_date

With the right index on table events, this query basically iterates over all the events on $some_fixed_date.
But I want to iterate only over the actions on $some_fixed_date.

A reasonable solution seems to be to create an index on actions that contains data (the date) from a different table (events). Is that possible?

My current approach is to add a date column to actions and an index on it (see the sketch after this list). It basically solves all my problems, but has other drawbacks:

  • date is stored twice (but this is minor)
  • it's easy for the data to become inconsistent
  • when writing a query, I have to think about which of the conditions (WHERE events.date = … or WHERE actions.date = …) will help more, or whether to use both (with this specific query it's obvious/can be tested, but it may depend on other WHERE conditions, e.g. if I have events.type_of_event = 'some_rare_type' and an index on (date, type_of_event), it's better to use that index instead)
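
For concreteness, the current workaround looks roughly like this (a sketch; the index name is illustrative, and the column is bigint to match the test setup below):

ALTER TABLE actions ADD COLUMN date bigint;

-- Backfill the new column from the parent table.
UPDATE actions a
SET    date = e.date
FROM   events e
WHERE  e.event_id = a.event_id;

CREATE INDEX actions_date_idx ON actions (date);

-- The query can then filter on the small table directly:
SELECT actions.* FROM actions WHERE actions.date = $some_fixed_date;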

So my question is: what can I do to avoid looping over the bigger table here?

UPD: I've populated some dummy data to get the execution plans:

DROP TABLE IF EXISTS actions;
DROP TABLE IF EXISTS events;
CREATE TABLE events
(
    event_id SERIAL primary key,
    date     bigint
);
CREATE TABLE actions
(
    action_id serial primary key,
    event_id  bigint references events (event_id),
    date      bigint
);

CREATE INDEX event_dates ON events(date);
CREATE INDEX action_dates ON actions(date);

TRUNCATE actions, events RESTART IDENTITY;

CREATE OR REPLACE PROCEDURE fill1()
    LANGUAGE plpgsql
AS
$$
DECLARE
    ev_id integer;
BEGIN
    FOR my_date IN 1..300
        LOOP
            -- 1,000,000 events without any actions for this date...
            INSERT INTO events(date) SELECT my_date FROM generate_series(1, 1000000) s(i);
            COMMIT;
            -- ...plus 1,000 events that each have exactly one action.
            FOR j IN 1..1000
                LOOP
                    INSERT INTO events(date) VALUES (my_date) RETURNING event_id INTO ev_id;
                    INSERT INTO actions(date, event_id) VALUES (my_date, ev_id);
                    COMMIT;
                END LOOP;
        END LOOP;
END;
$$;

CALL fill1();


EXPLAIN ANALYZE SELECT SUM(action_id) FROM actions JOIN events ON actions.event_id = events.event_id WHERE events.date = 42;

"Finalize Aggregate  (cost=36680.19..36680.20 rows=1 width=8) (actual time=474.883..481.246 rows=1 loops=1)"
"  ->  Gather  (cost=36679.97..36680.18 rows=2 width=8) (actual time=474.872..481.239 rows=3 loops=1)"
"        Workers Planned: 2"
"        Workers Launched: 2"
"        ->  Partial Aggregate  (cost=35679.97..35679.98 rows=1 width=8) (actual time=463.010..463.014 rows=1 loops=3)"
"              ->  Parallel Hash Join  (cost=6744.17..35678.78 rows=478 width=4) (actual time=392.698..462.978 rows=333 loops=3)"
"                    Hash Cond: (events.event_id = actions.event_id)"
"                    ->  Parallel Index Scan using event_dates on events  (cost=0.57..22541.95 rows=477978 width=4) (actual time=2.374..175.255 rows=333667 loops=3)"
"                          Index Cond: (date = 42)"
"                    ->  Parallel Hash  (cost=3675.71..3675.71 rows=176471 width=12) (actual time=76.658..76.659 rows=100000 loops=3)"
"                          Buckets: 131072  Batches: 4  Memory Usage: 4608kB"
"                          ->  Parallel Seq Scan on actions  (cost=0.00..3675.71 rows=176471 width=12) (actual time=0.015..28.225 rows=100000 loops=3)"
"Planning Time: 0.207 ms"
"Execution Time: 481.286 ms"

EXPLAIN ANALYZE SELECT SUM(action_id) FROM actions JOIN events ON actions.event_id = events.event_id WHERE actions.date = 42;
"Aggregate  (cost=3132.14..3132.15 rows=1 width=8) (actual time=2.013..2.014 rows=1 loops=1)"
"  ->  Nested Loop  (cost=0.99..3129.41 rows=1090 width=4) (actual time=0.028..1.944 rows=1000 loops=1)"
"        ->  Index Scan using action_dates on actions  (cost=0.42..306.31 rows=1090 width=12) (actual time=0.017..0.164 rows=1000 loops=1)"
"              Index Cond: (date = 42)"
"        ->  Index Only Scan using events_pkey on events  (cost=0.57..2.59 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1000)"
"              Index Cond: (event_id = actions.event_id)"
"              Heap Fetches: 1000"
"Planning Time: 0.183 ms"
"Execution Time: 2.034 ms"

At this data volume the difference is not that big yet, but it is already visible. Unfortunately I don't have a PostgreSQL instance with more spare disk space at the moment, but increasing the number of dates increases the difference (the Parallel Seq Scan on actions gets slower, while the Index Scans essentially don't, because they touch a fixed amount of data).

Best Answer

You cannot have an index on actions that uses columns from events, so no, you probably won't be able to make this query fast without some denormalization.

The best option is to store the date on actions, as you suggest. You can use triggers to keep the values synchronized, so that you don't run the risk of inconsistencies:

  • One AFTER trigger on events updates the related actions whenever a date changes.

  • A second trigger on actions makes sure that after any change to an action, its date is the same as in the related events row (see the sketch below).
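
A minimal sketch of such triggers, assuming the schema from the test setup above (PostgreSQL 11+ syntax; the function and trigger names are illustrative):

-- Trigger 1: when an event's date changes, propagate it to its actions.
CREATE FUNCTION sync_actions_date() RETURNS trigger
    LANGUAGE plpgsql
AS
$$
BEGIN
    UPDATE actions SET date = NEW.date WHERE event_id = NEW.event_id;
    RETURN NULL;  -- the return value of an AFTER row trigger is ignored
END;
$$;

CREATE TRIGGER events_date_sync
    AFTER UPDATE OF date ON events
    FOR EACH ROW
    WHEN (NEW.date IS DISTINCT FROM OLD.date)
    EXECUTE FUNCTION sync_actions_date();

-- Trigger 2: on any insert or update of an action, copy the date from
-- its parent event, overriding whatever value was supplied.
CREATE FUNCTION copy_event_date() RETURNS trigger
    LANGUAGE plpgsql
AS
$$
BEGIN
    SELECT date INTO NEW.date FROM events WHERE event_id = NEW.event_id;
    RETURN NEW;
END;
$$;

CREATE TRIGGER actions_date_copy
    BEFORE INSERT OR UPDATE ON actions
    FOR EACH ROW
    EXECUTE FUNCTION copy_event_date();

With the BEFORE trigger in place, inserts into actions don't even need to supply date explicitly; it is always taken from the parent event.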