Remove the `HAVING COUNT(*) > 0`. It's useless: no group can have a count of 0 after a `GROUP BY`.
Change the `GROUP BY` to: `GROUP BY client_id`. Grouping by `institution_id` is not needed; you already have a `WHERE` condition that narrows it to one value.
As @HLGEM suggested, remove the `SELECT *` and use a list of only the fields you need. Right now you are repeating data in the `client_id` field, which wastes server and network resources.
So the query becomes:
SELECT le.* --- only the fields you need here
--- for example `institution_id` is 224, so
--- there is no need to include that
FROM leads AS le
INNER JOIN (
SELECT MIN(id) AS min_id, institution_id, client_id
FROM leads
WHERE claimed_date IS NOT NULL
AND institution_id = 224
GROUP BY client_id
) AS cl
ON le.institution_id = cl.institution_id
AND le.client_id = cl.client_id
AND le.id <> cl.min_id
WHERE le.disabled = 0 ;
Then add an index on `(claimed_date, institution_id, client_id)` to speed up the nested subquery. If that doesn't improve the speed much, an index on `(disabled, institution_id, client_id)` should help the join.
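Spelled out as DDL (the index names here are illustrative, not prescribed), the two suggested indexes would be:

```sql
-- Supports the subquery's WHERE ... IS NOTNULL / = 224 filter and GROUP BY:
CREATE INDEX idx_leads_claimed_inst_client
    ON leads (claimed_date, institution_id, client_id);

-- Supports the outer query's disabled = 0 filter and the join columns:
CREATE INDEX idx_leads_disabled_inst_client
    ON leads (disabled, institution_id, client_id);
```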
You could also rewrite the query as:
SELECT le.*
FROM leads AS le
INNER JOIN (
SELECT MIN(id) AS min_id, client_id
FROM leads
WHERE claimed_date IS NOT NULL
AND institution_id = 224
GROUP BY client_id
) AS cl
ON le.client_id = cl.client_id
AND le.id <> cl.min_id
WHERE le.disabled = 0
AND le.institution_id = 224;
Query
Your query is forced to scan the whole table (or the whole index): every row could be another distinct unit. The only way to substantially shorten the process is a separate table with all available units, which helps as long as there are substantially fewer units than entries in `all_units`. Since you have ~11k units (added in a comment) for 25M entries, this should definitely help.
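Assuming no such table exists yet, a minimal sketch (the table and column names are chosen to match the queries below: `unit` with `unit_id`):

```sql
-- One row per distinct unit; seed it once from the big table,
-- then keep it current from the application (or a trigger).
CREATE TABLE unit (
    unit_id int PRIMARY KEY
);

INSERT INTO unit (unit_id)
SELECT DISTINCT unit_id
FROM   all_units;
```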
Depending on frequencies of values, there are a couple of query techniques to get your result considerably faster:
- recursive CTE
- `JOIN LATERAL`
- correlated subquery
Details in this related answer on SO:
Needing only the implicit index of the primary key on `(unit_id, unit_timestamp)`, this query should do the trick, using an implicit `CROSS JOIN` with `LATERAL`:
SELECT u.unit_id, a.max_ts
FROM   unit u
     , LATERAL (
       SELECT unit_timestamp AS max_ts
       FROM   all_units
       WHERE  unit_id = u.unit_id
       ORDER  BY unit_timestamp DESC
       LIMIT  1
       ) a;
Excludes units without an entry in `all_units`, like your original query.
Or a lowly correlated subquery (probably even faster):
SELECT u.unit_id
, (SELECT unit_timestamp
FROM all_units
WHERE unit_id = u.unit_id
ORDER BY unit_timestamp DESC
LIMIT 1) AS max_ts
FROM unit u;
Includes units without an entry in `all_units`.
Efficiency depends on the number of entries per unit: the more entries per unit, the bigger the potential gain from one of these queries.
In a quick local test with similar tables (500 "units", 1M rows in big table), the query with correlated subqueries was ~ 500x faster than your original. Index-only scans on the PK index of the big table vs. sequential scan in your original query.
Since your table keeps growing rapidly, a materialized view is probably not an option.
There is also `DISTINCT ON` as another possible query technique, but it's hardly going to be faster than your original query, so not the answer you are looking for. Details here:
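For comparison only, the `DISTINCT ON` variant would look like this:

```sql
-- DISTINCT ON keeps the first row per unit_id according to the ORDER BY,
-- i.e. the row with the latest unit_timestamp per unit.
SELECT DISTINCT ON (unit_id)
       unit_id, unit_timestamp AS max_ts
FROM   all_units
ORDER  BY unit_id, unit_timestamp DESC;
```

It still has to scan all rows per unit, which is why it won't beat the `LATERAL` or correlated-subquery approach here.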
Index
Your `partial_idx`:
CREATE INDEX partial_idx ON all_units (unit_id, unit_timestamp DESC);
is not in fact a partial index, and it is also redundant: Postgres can scan indexes backwards at practically the same speed, so the PK index serves just as well. Drop this additional index.
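Dropping it is a one-liner:

```sql
-- The PK index on (unit_id, unit_timestamp) covers the same lookups.
DROP INDEX partial_idx;
```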
Table layout
A couple of points for your table definition.
CREATE TABLE all_units (
unit_timestamp timestamp,
unit_id int4,
lon float4,
lat float4,
speed float4,
status varchar(255), -- might be improved.
PRIMARY KEY (unit_id, unit_timestamp)
);
`timestamp(6)` doesn't make much sense; it's effectively the same as plain `timestamp`, which already stores a maximum of 6 fractional digits.
I switched the positions of the first two columns to save 4 bytes of padding per row, which amounts to ~100 MB for 25M rows (the exact result depends on `status`). Smaller tables are typically faster for everything.
If `status` isn't free text but some kind of standardized note, you could replace it with something a lot cheaper. More about `varchar(255)` in Postgres:
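A minimal sketch of one cheap replacement, assuming `status` only ever holds a fixed set of values (the values listed here are made up for illustration):

```sql
-- Hypothetical status values; replace with your actual set.
CREATE TYPE unit_status AS ENUM ('active', 'idle', 'offline');

ALTER TABLE all_units
    ALTER COLUMN status TYPE unit_status
    USING status::unit_status;  -- fails if any row holds an unlisted value
```

An `enum` value occupies 4 bytes per row instead of a variable-length string; a tiny lookup table referenced by a `smallint` FK is another common option.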
Server configuration
You need to configure your server. Most of your settings seem to be conservative defaults: 1 MB for `shared_buffers` or `work_mem` is way too low for an installation with millions of rows, and `random_page_cost = 4` is too high for any modern system with plenty of RAM. Start with the manual and the Postgres Wiki:
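As a starting point only (the right values depend entirely on your hardware and workload), a `postgresql.conf` fragment might look like:

```
# Illustrative values for a dedicated box with, say, 8 GB RAM and SSD storage.
shared_buffers = 2GB          # often ~25% of RAM
work_mem = 64MB               # per sort/hash node, so keep it moderate
effective_cache_size = 6GB    # planner hint: RAM available for caching
random_page_cost = 1.1        # close to seq_page_cost on SSDs
```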