You write:
Each customer can have multiple sites, but only one should be
displayed in this list.
Yet, your query retrieves all rows. That would be a point to optimize. But you also do not define which site is to be picked.
Either way, it does not matter much here. Your EXPLAIN shows only 5026 rows for the site scan (5018 for the customer scan), so hardly any customer actually has more than one site. Did you ANALYZE your tables before running EXPLAIN?
From the numbers I see in your EXPLAIN, indexes will give you nothing for this query; sequential table scans will be the fastest possible way. Half a second is rather slow for 5000 rows, though. Maybe your database needs some general performance tuning?
Maybe the query itself is faster, but "half a second" includes network transfer? EXPLAIN ANALYZE would tell us more.
If this query is your bottleneck, I would suggest you implement a materialized view.
After you provided more information, I find that my diagnosis pretty much holds.
The query itself needs 27 ms. Not much of a problem there. "Half a second" was the kind of misunderstanding I had suspected. The slow part is the network transfer (plus ssh encoding / decoding, possibly rendering). You should only retrieve 100 rows; that would solve most of it, even if it means executing the whole query every time.
If you go the route with a materialized view like I proposed, you could add a serial number without gaps to the table, plus an index on it, by adding a column row_number() OVER (<your sort criteria here>) AS mv_id.
Then you can query:
SELECT *
FROM materialized_view
WHERE mv_id >= 2700
AND mv_id < 2800;
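As a sketch, the materialized view behind this could look as follows. The view name matches the query above, but the subquery, column list, and sort column are placeholders for your actual query:

```sql
-- Sketch only: substitute your real query and sort criteria.
CREATE MATERIALIZED VIEW materialized_view AS
SELECT row_number() OVER (ORDER BY customer_name) AS mv_id, sub.*
FROM  (SELECT customer_name, site_name
       FROM   customer_site_query_here) sub;  -- hypothetical source query

-- A unique index on mv_id makes the range scan above very fast.
CREATE UNIQUE INDEX materialized_view_mv_id_idx ON materialized_view (mv_id);
```

Remember to refresh the materialized view when the underlying data changes, or the gapless numbering goes stale.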
This will perform very fast. LIMIT / OFFSET cannot compete; that has to compute the whole table before it can sort and pick 100 rows.
pgAdmin timing
When you execute a query from the query tool, the message pane shows something like:
Total query runtime: 62 ms.
And the status line shows the same time. I quote pgAdmin help about that:
The status line will show how long the last query took to complete. If
a dataset was returned, not only the elapsed time for server execution
is displayed, but also the time to retrieve the data from the server
to the Data Output page.
If you want to see the time spent on the server, you need to use SQL EXPLAIN ANALYZE, the built-in Shift + F7 keyboard shortcut, or Query -> Explain analyze. Then, at the bottom of the explain output, you get something like this:
Total runtime: 0.269 ms
This can be improved in a thousand and one ways, then it should be a matter of milliseconds.
Better Queries
This is just your query reformatted with aliases and some noise removed to clear the fog:
SELECT count(DISTINCT t.id)
FROM tickets t
JOIN transactions tr ON tr.objectid = t.id
JOIN attachments a ON a.transactionid = tr.id
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
Most of the problem with your query lies in the first two tables, tickets and transactions, which are missing from the question. I'm filling in with educated guesses.
t.status, t.type and tr.objecttype should probably not be text, but enum or possibly some very small value referencing a look-up table.
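A hedged sketch of the enum variant; the type name and value list are assumptions, so adapt them to the values actually present in your data:

```sql
-- Hypothetical enum; the listed values are guesses.
CREATE TYPE ticket_status AS ENUM ('new', 'open', 'resolved', 'deleted');

-- Convert the existing text column.
-- This fails if the table holds any value not in the enum.
ALTER TABLE tickets
  ALTER COLUMN status TYPE ticket_status
  USING status::ticket_status;
```

An enum occupies 4 bytes and compares faster than variable-length text, at the price of being harder to extend later.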
EXISTS semi-join
Assuming tickets.id is the primary key, this rewritten form should be much cheaper:
SELECT count(*)
FROM tickets t
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id
AND EXISTS (
SELECT 1
FROM transactions tr
JOIN attachments a ON a.transactionid = tr.id
WHERE tr.objectid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate')
);
Instead of multiplying rows with two 1:n joins, only to collapse multiple matches in the end with count(DISTINCT id), use an EXISTS semi-join, which can stop looking as soon as the first match is found, and at the same time obsoletes the final DISTINCT step. Per documentation:
The subquery will generally only be executed long enough to determine
whether at least one row is returned, not all the way to completion.
Effectiveness depends on how many transactions per ticket and attachments per transaction there are.
Determine order of joins with join_collapse_limit
If you know that your search term for attachments.contentindex is very selective (more selective than other conditions in the query, which is probably the case for 'frobnicate', but not for 'problem'), you can force the sequence of joins. The query planner can hardly judge the selectivity of particular words, except for the most common ones. Per documentation:
join_collapse_limit (integer)
[...]
Because the query planner does not always choose the optimal
join order, advanced users can elect to temporarily set this variable
to 1, and then specify the join order they desire explicitly.
Use SET LOCAL to set it for the current transaction only:
BEGIN;
SET LOCAL join_collapse_limit = 1;
SELECT count(DISTINCT t.id)
FROM attachments a -- 1st
JOIN transactions tr ON tr.id = a.transactionid -- 2nd
JOIN tickets t ON t.id = tr.objectid -- 3rd
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
ROLLBACK; -- or COMMIT;
The order of WHERE conditions is always irrelevant. Only the order of joins is relevant here.
Or use a CTE for a similar effect, like @jjanes explains in "Option 2".
Indexes
B-tree indexes
Take all conditions on tickets that are used identically in most queries and create a partial index on tickets:
CREATE INDEX tickets_partial_idx
ON tickets(id)
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id;
If one of the conditions is variable, drop it from the WHERE clause and prepend the column as an index column instead.
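For instance, assuming type is the condition that varies between queries (a sketch; apply the same transformation to whichever condition actually varies):

```sql
-- "type" moved from the WHERE clause into the index columns,
-- so the index serves queries filtering on any type value.
CREATE INDEX tickets_partial_idx
ON tickets (type, id)
WHERE status <> 'deleted'
AND   effectiveid = id;
```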
Another one on transactions
:
CREATE INDEX transactions_partial_idx
ON transactions(objecttype, objectid, id);
The third column is just to enable index-only scans.
Also, since you have this composite index with two integer columns on attachments
:
"attachments3" btree (parent, transactionid)
This additional index is a complete waste; delete it:
"attachments1" btree (parent)
GIN index
Add transactionid to your GIN index to make it a lot more effective. This may be another silver bullet, because it potentially allows index-only scans, eliminating visits to the big table completely.
You need additional operator classes, provided by the additional module btree_gin. The resulting index would look like:
"contentindex_idx" gin (transactionid, contentindex)
4 bytes from an integer column don't make the index much bigger. Also, fortunately for you, GIN indexes are different from B-tree indexes in a crucial aspect. Per documentation:
A multicolumn GIN index can be used with query conditions that involve
any subset of the index's columns. Unlike B-tree or GiST, index search
effectiveness is the same regardless of which index column(s) the
query conditions use.
Bold emphasis mine. So you just need the one (big and somewhat costly) GIN index.
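To illustrate the quoted behavior, both of these hypothetical queries could use the one multicolumn GIN index, with or without the transactionid condition (the value 123 is made up):

```sql
-- Full-text condition alone still uses the multicolumn GIN index:
SELECT count(*)
FROM   attachments
WHERE  contentindex @@ plainto_tsquery('frobnicate');

-- Combined with the integer column (this is what btree_gin enables):
SELECT count(*)
FROM   attachments
WHERE  transactionid = 123
AND    contentindex @@ plainto_tsquery('frobnicate');
```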
Table definition
Move the integer not null columns to the front. This has a couple of minor positive effects on storage and performance. It saves 4 - 8 bytes per row in this case.
Table "public.attachments"
Column | Type | Modifiers
-----------------+-----------------------------+------------------------------
id | integer | not null default nextval('...
transactionid | integer | not null
parent | integer | not null default 0
creator | integer | not null default 0 -- !
created | timestamp | -- !
messageid | character varying(160) |
subject | character varying(255) |
filename | character varying(255) |
contenttype | character varying(80) |
contentencoding | character varying(80) |
content | text |
headers | text |
contentindex | tsvector |
Best Answer
You don't really need the parent-child relationship in the unit_conversion table, because the units in the same family are naturally related to each other by the order of coefficient, as long as you have the family identified.