This can be improved in a number of ways; then it should be a matter of milliseconds.
Better Queries
This is just your query reformatted with aliases and some noise removed to clear the fog:
SELECT count(DISTINCT t.id)
FROM tickets t
JOIN transactions tr ON tr.objectid = t.id
JOIN attachments a ON a.transactionid = tr.id
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
Most of the problem with your query lies in the first two tables, tickets and transactions, which are missing from the question. I'm filling in with educated guesses.
t.status, t.type and tr.objecttype should probably not be text, but enum or possibly some very small value referencing a look-up table.
EXISTS semi-join
Assuming tickets.id is the primary key, this rewritten form should be much cheaper:
SELECT count(*)
FROM tickets t
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id
AND EXISTS (
SELECT 1
FROM transactions tr
JOIN attachments a ON a.transactionid = tr.id
WHERE tr.objectid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate')
);
Instead of multiplying rows with two 1:n joins, only to collapse multiple matches at the end with count(DISTINCT id), use an EXISTS semi-join, which can stop looking as soon as the first match is found and at the same time obsoletes the final DISTINCT step. Per documentation:
The subquery will generally only be executed long enough to determine
whether at least one row is returned, not all the way to completion.
Effectiveness depends on how many transactions per ticket and attachments per transaction there are.
Determine order of joins with join_collapse_limit
If you know that your search term for attachments.contentindex is very selective - more selective than the other conditions in the query (which is probably the case for 'frobnicate', but not for 'problem') - you can force the sequence of joins. The query planner can hardly judge the selectivity of particular words, except for the most common ones. Per documentation:
join_collapse_limit (integer)
[...]
Because the query planner does not always choose the optimal
join order, advanced users can elect to temporarily set this variable
to 1, and then specify the join order they desire explicitly.
Use SET LOCAL to set it for the current transaction only.
BEGIN;
SET LOCAL join_collapse_limit = 1;
SELECT count(DISTINCT t.id)
FROM attachments a -- 1st
JOIN transactions tr ON tr.id = a.transactionid -- 2nd
JOIN tickets t ON t.id = tr.objectid -- 3rd
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
ROLLBACK; -- or COMMIT;
The order of WHERE conditions is always irrelevant. Only the order of joins is relevant here.
Or use a CTE for a similar effect, like @jjanes explains in "Option 2".
Indexes
B-tree indexes
Take all conditions on tickets that are used identically in most queries and create a partial index on tickets:
CREATE INDEX tickets_partial_idx
ON tickets(id)
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id;
If one of the conditions is variable, drop it from the WHERE clause and prepend the column as an index column instead.
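For example, if status varies per query while the other two conditions stay fixed, the variant might look like this (index name is illustrative):

```sql
-- status moves from the WHERE clause to the leading index column
CREATE INDEX tickets_partial2_idx
ON tickets (status, id)
WHERE type = 'ticket'
AND   effectiveid = id;
```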
Another one on transactions:
CREATE INDEX transactions_partial_idx
ON transactions(objecttype, objectid, id);
The third column is just to enable index-only scans.
Also, since you have this composite index with two integer columns on attachments:
"attachments3" btree (parent, transactionid)
This additional index is a complete waste, delete it:
"attachments1" btree (parent)
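Removing it is a one-liner; on a busy system, the CONCURRENTLY variant avoids blocking writes:

```sql
DROP INDEX attachments1;
-- or, without blocking concurrent writes (cannot run inside a transaction):
-- DROP INDEX CONCURRENTLY attachments1;
```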
GIN index
Add transactionid to your GIN index to make it a lot more effective. This may be another silver bullet, because it potentially allows index-only scans, eliminating visits to the big table completely. You need additional operator classes, provided by the additional module btree_gin:
"contentindex_idx" gin (transactionid, contentindex)
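A sketch of the required steps, assuming you can replace the existing index (installing the extension requires the CREATE privilege on the database):

```sql
-- one-time setup: adds GIN operator classes for plain types like integer
CREATE EXTENSION IF NOT EXISTS btree_gin;

DROP INDEX contentindex_idx;
CREATE INDEX contentindex_idx ON attachments
USING gin (transactionid, contentindex);
```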
4 bytes from an integer column don't make the index much bigger. Also, fortunately for you, GIN indexes are different from B-tree indexes in a crucial aspect. Per documentation:
A multicolumn GIN index can be used with query conditions that involve
any subset of the index's columns. Unlike B-tree or GiST, index search
effectiveness is the same regardless of which index column(s) the
query conditions use.
Bold emphasis mine. So you just need the one (big and somewhat costly) GIN index.
Table definition
Move the integer not null columns to the front. This has a couple of minor positive effects on storage and performance, saving 4 - 8 bytes per row in this case.
Table "public.attachments"
Column | Type | Modifiers
-----------------+-----------------------------+------------------------------
id | integer | not null default nextval('...
transactionid | integer | not null
parent | integer | not null default 0
creator | integer | not null default 0 -- !
created | timestamp | -- !
messageid | character varying(160) |
subject | character varying(255) |
filename | character varying(255) |
contenttype | character varying(80) |
contentencoding | character varying(80) |
content | text |
headers | text |
contentindex | tsvector |
Your answer basically gets the job done:
SELECT b.id, array_agg(b.stock) AS stock
FROM (
SELECT i.id, COALESCE(i_s.stock, 0) AS stock
FROM item i
CROSS JOIN unnest('{1,2}'::int[]) n
LEFT JOIN item_stock i_s ON i.id = i_s.item_id AND n.n = i_s.shop_id
ORDER BY i.id, n.n
) b
GROUP BY b.id;
Two notable changes:
Order is not guaranteed without ORDER BY in the subquery or as an additional clause to array_agg() (typically more expensive). And that's the core element of your question.
unnest('{1,2}'::int[]) instead of generate_series(1,2), as the requested shop IDs will hardly be sequential all the time.
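To illustrate the difference: generate_series(1,2) can only produce a gapless sequence, while unnest() takes any list of IDs in any order:

```sql
-- arbitrary, non-sequential shop IDs, preserved in array order
SELECT * FROM unnest('{7,3}'::int[]) AS n(n);
-- returns two rows: 7, then 3
```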
I also moved the set-returning function from the SELECT list to a separate table expression attached with CROSS JOIN. Standard SQL form, but that's just a matter of clarity and taste, not a necessity, at least in Postgres 10 or later.
Doing the same with LEFT JOIN LATERAL and an ARRAY constructor might be a bit faster, as we don't need the outer GROUP BY and the ARRAY constructor is typically faster, too:
SELECT i.id, s.stock
FROM item i
CROSS JOIN LATERAL (
SELECT ARRAY(
SELECT COALESCE(i_s.stock, 0)
FROM unnest('{1,2}'::int[]) n
LEFT JOIN item_stock i_s ON i_s.shop_id = n.n
AND i_s.item_id = i.id
ORDER BY n.n
) AS stock
) s;
And if you have more than just the two shops, a nested crosstab() should provide top performance:
SELECT i.id, COALESCE(stock, '{0,0}') AS stock
FROM item i
LEFT JOIN (
SELECT id, ARRAY[COALESCE(shop1, 0), COALESCE(shop2, 0)] AS stock
FROM crosstab(
$$SELECT item_id, shop_id, stock
FROM item_stock
WHERE shop_id = ANY ('{1,2}'::int[])
ORDER BY 1,2$$
, $$SELECT unnest('{1,2}'::int[])$$
) AS ct (id int, shop1 int, shop2 int)
) i_s USING (id);
Needs to be adapted in more places to cater for different shop IDs.
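For instance, for shops 3 and 7 (a sketch; only the shop IDs, the column names and the default array change):

```sql
SELECT i.id, COALESCE(stock, '{0,0}') AS stock
FROM   item i
LEFT   JOIN (
   SELECT id, ARRAY[COALESCE(shop3, 0), COALESCE(shop7, 0)] AS stock
   FROM   crosstab(
      $$SELECT item_id, shop_id, stock
        FROM   item_stock
        WHERE  shop_id = ANY ('{3,7}'::int[])
        ORDER  BY 1, 2$$
    , $$SELECT unnest('{3,7}'::int[])$$
      ) AS ct (id int, shop3 int, shop7 int)
   ) i_s USING (id);
```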
db<>fiddle here
Index
Make sure you have at least an index on item_stock (shop_id, item_id) - typically provided by a PRIMARY KEY on those columns. For the crosstab query, it also matters that shop_id comes first.
Adding stock as another index column may allow faster index-only scans. In Postgres 11 or later, consider an INCLUDE clause in the PK like so:
PRIMARY KEY (shop_id, item_id) INCLUDE (stock)
But only if you need it a lot, as it makes the index a bit bigger and possibly more susceptible to bloat from updates.
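In DDL form, such a covering PK might be declared like this (column types are assumptions based on the queries above):

```sql
-- Postgres 11+: the PK index also covers stock for index-only scans
CREATE TABLE item_stock (
  shop_id int NOT NULL
, item_id int NOT NULL
, stock   int NOT NULL DEFAULT 0
, PRIMARY KEY (shop_id, item_id) INCLUDE (stock)
);
```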
Best Answer
To get all unique pairs of elements from an array of arbitrary length:
You can then join to the message table. Without knowing any details of your setup, my educated guess is that a LATERAL join will be fastest, as it can use the GIN index on messages.words - create it if you don't have one yet. SQL Fiddle.