A few thoughts. Consider range partitioning on "timestamp"; it will reduce the amount of work your query has to do. A further optimization might be to calculate and store the aggregate for "closed" partitions. You will, of course, have to recalculate this when you modify historical information.
As a bonus, it will be much easier to roll out historical data that is no longer needed.
As mentioned, these are just thoughts and reflections; they may or may not fit your situation. One pitfall may be the number of different intervals you use for reporting: if the larger intervals are not multiples of the partitioning interval, this won't work well.
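On PostgreSQL 10 or later, such partitioning can be declared roughly like this (a sketch only: the column list and the monthly partition bounds are assumptions; on older versions you would use table inheritance plus a redirect trigger instead):

```sql
-- Sketch: range-partition market_trades by month on "timestamp".
CREATE TABLE market_trades (
    id          bigserial,
    "timestamp" timestamp NOT NULL,
    amount      numeric
) PARTITION BY RANGE ("timestamp");

-- One partition per month; a "closed" month can have its aggregate
-- precalculated and stored once, as suggested above.
CREATE TABLE market_trades_2013_08 PARTITION OF market_trades
    FOR VALUES FROM ('2013-08-01') TO ('2013-09-01');
```

Rolling out old data then becomes a cheap `DROP TABLE market_trades_2013_08;` (or an `ALTER TABLE ... DETACH PARTITION`) instead of a bulk `DELETE`.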
Edit: How is the plan affected by creating a temp table and an index on "timestamp"?
create index X on market_trades ( timestamp );
create temp table T ( time_range tsrange );

insert into T ( time_range )
WITH vals AS (
    SELECT '2013-08-19 0:00'::timestamp AS frame_start,
           '2013-08-26 0:00'::timestamp AS frame_end,
           '17h'::interval AS interval_length
), intervals AS (
    SELECT tsrange(start_time,
                   lead(start_time, 1, frame_end) OVER (ORDER BY start_time NULLS FIRST)) AS time_range
    FROM (
        SELECT generate_series(frame_start, frame_end, interval_length) AS start_time,
               frame_end
        FROM vals
    ) _
    WHERE start_time < frame_end
)
SELECT time_range
FROM intervals i;

create index x on T ( time_range );
analyze T;

SELECT T.time_range, count(td.id) AS agg
FROM T
LEFT JOIN market_trades td ON td.timestamp <@ T.time_range
GROUP BY T.time_range
ORDER BY T.time_range;
If this is still desirable for you (though not that vital), here is a solution:
CREATE TABLE rows_affected_in_session (
    session_pid   int NOT NULL,
    session_time  timestamptz NOT NULL,
    table_name    text NOT NULL,
    rows_affected int NOT NULL,
    PRIMARY KEY (session_pid, session_time, table_name)
);
CREATE OR REPLACE FUNCTION f_update_rows_affected_in_session(p_table regclass, p_rows_affected int)
RETURNS void AS $BODY$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM rows_affected_in_session
                   WHERE session_pid = pg_backend_pid()
                     AND session_time = current_timestamp
                     AND table_name = p_table::text) THEN
        INSERT INTO rows_affected_in_session (session_pid, session_time, table_name, rows_affected)
        VALUES (pg_backend_pid(), current_timestamp, p_table::text, p_rows_affected);
    ELSE
        UPDATE rows_affected_in_session
        SET rows_affected = rows_affected + p_rows_affected
        WHERE session_pid = pg_backend_pid()
          AND session_time = current_timestamp
          AND table_name = p_table::text;
    END IF;
END;
$BODY$ LANGUAGE plpgsql SECURITY DEFINER;
CREATE OR REPLACE FUNCTION f_rows_affected_in_session()
RETURNS TABLE (table_name text, rows_affected int) AS $BODY$
    SELECT table_name, rows_affected
    FROM rows_affected_in_session
    WHERE session_pid = pg_backend_pid()
      AND session_time = current_timestamp
    ORDER BY 1
$BODY$ LANGUAGE sql SECURITY DEFINER;
You use it by calling f_update_rows_affected_in_session(<table>, <rows_affected>) after every successful insert within your triggers (where you can obtain the count via the ROW_COUNT diagnostic), and then use f_rows_affected_in_session() to retrieve the rows affected within the session (from its output you can determine exactly which child tables were affected, or sum the results for a grand total). Example:
SELECT f_update_rows_affected_in_session('mytable'::regclass, 1);
SELECT f_update_rows_affected_in_session('mytable'::regclass, 1);
SELECT f_update_rows_affected_in_session('mytable'::regclass, 1);
SELECT * FROM f_rows_affected_in_session();
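Inside a partition-redirect trigger, the count can be read with GET DIAGNOSTICS ... ROW_COUNT right after the insert, which reflects the statement just executed inside the function. A sketch, where `mytable` and its child `mytable_2013` are hypothetical names:

```sql
CREATE OR REPLACE FUNCTION mytable_insert_trigger() RETURNS trigger AS $BODY$
DECLARE
    v_rows int;
BEGIN
    -- Redirect the row to the (hypothetical) child table.
    INSERT INTO mytable_2013 VALUES (NEW.*);
    -- ROW_COUNT reflects the INSERT executed just above in this function.
    GET DIAGNOSTICS v_rows = ROW_COUNT;
    PERFORM f_update_rows_affected_in_session('mytable_2013'::regclass, v_rows);
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;
```

Attached FOR EACH ROW BEFORE INSERT on the parent, this both redirects the row and accumulates the per-child counts that f_rows_affected_in_session() later reports.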
Best Answer
Having come to the conclusion that there isn't an easy built-in way, here is my quick solution:
Create this function to capture the explain analyze plan result:
Run this to get the row count (only tested for SELECT / CREATE TEMP TABLE statements; the regex takes the 2nd rows= number):
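A minimal sketch of that approach (the function name, its signature, and the final query here are assumptions, not the original author's code):

```sql
-- Sketch: capture EXPLAIN ANALYZE output for an arbitrary statement.
CREATE OR REPLACE FUNCTION f_explain_analyze(p_sql text)
RETURNS SETOF text AS $BODY$
BEGIN
    RETURN QUERY EXECUTE 'EXPLAIN ANALYZE ' || p_sql;
END;
$BODY$ LANGUAGE plpgsql;

-- Take the top plan line and pull its 2nd rows= number: the *actual* row
-- count. (The 1st rows= on that line is only the planner's estimate.)
SELECT m[1]::bigint AS row_count
FROM (
    SELECT regexp_matches(plan_line, 'rows=(\d+)', 'g') AS m
    FROM (SELECT f.plan_line
          FROM f_explain_analyze('SELECT 1') AS f(plan_line)
          LIMIT 1) top
) matches
OFFSET 1 LIMIT 1;
```

Note that EXPLAIN ANALYZE actually executes the statement, so for anything other than a plain SELECT this has side effects.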