This can be improved in a number of ways; then it should be a matter of milliseconds.
Better Queries
This is just your query reformatted with aliases and some noise removed to clear the fog:
SELECT count(DISTINCT t.id)
FROM tickets t
JOIN transactions tr ON tr.objectid = t.id
JOIN attachments a ON a.transactionid = tr.id
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
Most of the problem with your query lies in the first two tables, tickets and transactions, whose definitions are missing from the question. I'm filling in with educated guesses.
t.status, t.type and tr.objecttype should probably not be text, but enum or possibly some very small value referencing a look-up table.
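As a sketch (the enum labels here are assumptions; use whatever values actually occur in your data):

```sql
-- Hypothetical example: the actual status values are assumptions.
CREATE TYPE ticket_status AS ENUM ('new', 'open', 'resolved', 'deleted');

ALTER TABLE tickets
  ALTER COLUMN status TYPE ticket_status
  USING status::ticket_status;
```

An enum value occupies 4 bytes and compares faster than variable-length text.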
EXISTS semi-join
Assuming tickets.id is the primary key, this rewritten form should be much cheaper:
SELECT count(*)
FROM tickets t
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id
AND EXISTS (
SELECT 1
FROM transactions tr
JOIN attachments a ON a.transactionid = tr.id
WHERE tr.objectid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate')
);
Instead of multiplying rows with two 1:n joins, only to collapse multiple matches in the end with count(DISTINCT id), use an EXISTS semi-join, which can stop looking as soon as the first match is found and also makes the final DISTINCT step obsolete. Per documentation:
The subquery will generally only be executed long enough to determine
whether at least one row is returned, not all the way to completion.
Effectiveness depends on how many transactions per ticket and attachments per transaction there are.
Determine order of joins with join_collapse_limit
If you know that your search term for attachments.contentindex is very selective - more selective than the other conditions in the query (which is probably the case for 'frobnicate', but not for 'problem') - you can force the sequence of joins. The query planner can hardly judge the selectivity of particular words, except for the most common ones. Per documentation:
join_collapse_limit (integer)
[...]
Because the query planner does not always choose the optimal
join order, advanced users can elect to temporarily set this variable
to 1, and then specify the join order they desire explicitly.
Use SET LOCAL to set it for the current transaction only.
BEGIN;
SET LOCAL join_collapse_limit = 1;
SELECT count(DISTINCT t.id)
FROM attachments a -- 1st
JOIN transactions tr ON tr.id = a.transactionid -- 2nd
JOIN tickets t ON t.id = tr.objectid -- 3rd
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
ROLLBACK; -- or COMMIT;
The order of WHERE conditions is always irrelevant. Only the order of joins is relevant here.
Or use a CTE for a similar effect, like @jjanes explains in "Option 2".
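For illustration, such a CTE could look like this (a sketch only; the MATERIALIZED keyword requires Postgres 12 or later, and @jjanes' actual "Option 2" may differ):

```sql
WITH att AS MATERIALIZED (  -- force the full-text search to run first
   SELECT transactionid
   FROM   attachments
   WHERE  contentindex @@ plainto_tsquery('frobnicate')
   )
SELECT count(DISTINCT t.id)
FROM   att
JOIN   transactions tr ON tr.id = att.transactionid
JOIN   tickets t       ON t.id = tr.objectid
WHERE  t.status <> 'deleted'
AND    t.type = 'ticket'
AND    t.effectiveid = t.id
AND    tr.objecttype = 'RT::Ticket';
```

The materialized CTE acts as an optimization fence, so the selective full-text condition is evaluated before the joins.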
Indexes
B-tree indexes
Take all conditions on tickets that are used identically in most queries and create a partial index on tickets:
CREATE INDEX tickets_partial_idx
ON tickets(id)
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id;
If one of the conditions is variable, drop it from the WHERE clause and prepend the column as an index column instead.
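For example, if type varied between queries while the other two conditions stayed fixed, a sketch could look like:

```sql
-- Hypothetical variant: 'type' is now a leading index column
-- instead of part of the partial-index predicate.
CREATE INDEX tickets_partial_idx
ON tickets(type, id)
WHERE status <> 'deleted'
AND effectiveid = id;
```

The index then serves queries filtering on any type value, not just 'ticket'.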
Another one on transactions:
CREATE INDEX transactions_partial_idx
ON transactions(objecttype, objectid, id);
The third column is just to enable index-only scans.
Also, since you have this composite index with two integer columns on attachments:
"attachments3" btree (parent, transactionid)
this additional index is a complete waste; delete it:
"attachments1" btree (parent)
GIN index
Add transactionid to your GIN index to make it a lot more effective. This may be another silver bullet, because it potentially allows index-only scans, eliminating visits to the big table completely. You need the additional operator classes provided by the additional module btree_gin:
"contentindex_idx" gin (transactionid, contentindex)
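In short (a sketch; CREATE EXTENSION requires the btree_gin module to be installed on the server):

```sql
-- btree_gin provides GIN operator classes for plain types like integer
CREATE EXTENSION IF NOT EXISTS btree_gin;

CREATE INDEX contentindex_idx
ON attachments USING gin (transactionid, contentindex);
```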
4 bytes from an integer column don't make the index much bigger. Also, fortunately for you, GIN indexes are different from B-tree indexes in a crucial aspect. Per documentation:
A multicolumn GIN index can be used with query conditions that involve
any subset of the index's columns. Unlike B-tree or GiST, index search
effectiveness is the same regardless of which index column(s) the
query conditions use.
Emphasis mine. So you just need the one (big and somewhat costly) GIN index.
Table definition
Move the integer not null columns
to the front. This has a couple of minor positive effects on storage and performance. Saves 4 - 8 bytes per row in this case.
Table "public.attachments"
Column | Type | Modifiers
-----------------+-----------------------------+------------------------------
id | integer | not null default nextval('...
transactionid | integer | not null
parent | integer | not null default 0
creator | integer | not null default 0 -- !
created | timestamp | -- !
messageid | character varying(160) |
subject | character varying(255) |
filename | character varying(255) |
contenttype | character varying(80) |
contentencoding | character varying(80) |
content | text |
headers | text |
contentindex | tsvector |
I hope you are willing to do some programming outside of SQL.
To find the recipients that need the most pruning:
SELECT recipient
FROM inbox
GROUP BY recipient
HAVING COUNT(*) > 30
ORDER BY COUNT(*) DESC
LIMIT 100
Then, for each recipient, first find the 31st id:
SELECT id
FROM inbox
WHERE recipient = $recipient
ORDER BY id DESC
LIMIT 30,1
And delete the excess baggage:
DELETE FROM inbox
WHERE recipient = $recipient
AND id < $id
You would need INDEX(recipient, id).
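In MySQL syntax, that index could be created as (the index name is my own choice):

```sql
CREATE INDEX inbox_recipient_id ON inbox (recipient, id);
```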
This process could be running continually. The first SQL would be the slowest, but it would run "Using index" and probably not be too bad, maybe 0.1 sec for 200K rows in inbox.
If you want to stay in SQL, this would let you do one recipient at a time:
BEGIN;
SELECT @recipient := recipient
FROM inbox
GROUP BY recipient
HAVING COUNT(*) > 30
ORDER BY COUNT(*) DESC
LIMIT 1
FOR UPDATE;
SELECT @id := id
FROM inbox
WHERE recipient = @recipient
ORDER BY id DESC
LIMIT 30,1
FOR UPDATE;
DELETE FROM inbox
WHERE recipient = @recipient
AND id < @id;
COMMIT;
Then, put that in a Stored Procedure with a loop around it. And, if you want to be further 'nice', add a SELECT SLEEP(1); in the loop but outside the transaction.
(Trying to do the entire thing in a single statement makes my brain hurt.)
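A minimal sketch of such a procedure in MySQL (the procedure name, variable names, loop-exit condition, and SLEEP placement are my own; the FOR UPDATE locking from the statements above is omitted for brevity):

```sql
DELIMITER //

CREATE PROCEDURE prune_inbox()
BEGIN
  DECLARE v_recipient VARCHAR(255);
  DECLARE v_id INT;

  prune_loop: LOOP
    SET v_recipient = NULL;  -- SELECT ... INTO leaves it unchanged when no row matches

    START TRANSACTION;

    -- Pick the recipient with the most excess rows
    SELECT recipient INTO v_recipient
    FROM inbox
    GROUP BY recipient
    HAVING COUNT(*) > 30
    ORDER BY COUNT(*) DESC
    LIMIT 1;

    IF v_recipient IS NULL THEN
      ROLLBACK;
      LEAVE prune_loop;      -- nothing left to prune
    END IF;

    -- Find the 31st-newest id for that recipient
    SELECT id INTO v_id
    FROM inbox
    WHERE recipient = v_recipient
    ORDER BY id DESC
    LIMIT 30,1;

    -- Delete everything older
    DELETE FROM inbox
    WHERE recipient = v_recipient
      AND id < v_id;

    COMMIT;

    DO SLEEP(1);             -- be 'nice' to concurrent traffic
  END LOOP;
END //

DELIMITER ;
```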
Best Answer
To know what performance you will have, you have to test on your data. I obviously can't do that, so I made up my own XML data to test the two queries you have in this question.
I created a table with 5000 rows, each containing an XML document of 9475 characters in 415 nodes.
I then executed the queries to search for a value that is present in the first node (rpc) and another value that is present in the last node (SERVER ROLE). The I/O for the different queries is the same, so here is the output from using set statistics time on for the four runs:
- Search for rpc with charindex
- Search for rpc with xml exist
- Search for SERVER ROLE with charindex
- Search for SERVER ROLE with xml exist
The clear winner in both cases is the XML query. It does a better job of scanning the entire XML, and it does a much better job of early termination when the search string is found.
This is true for the test data above using SQL Server 2012. It could be different for you with your data and your search strings. You have to test to know what is best for you.
NOTE: As stated in the answer to your other question, the two queries above will not return the same result, because the XML query only searches node values, whereas the charindex query searches the entire XML document including node names and markup.
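Since the compared queries are not reproduced here, the two query shapes look roughly like this (a sketch only; the table name, column name, and exact XQuery path are made up, not taken from the linked question):

```sql
-- charindex: searches the whole document, including node names and markup
SELECT count(*)
FROM docs
WHERE CHARINDEX('rpc', CAST(x AS nvarchar(max))) > 0;

-- exist(): searches node text values only
SELECT count(*)
FROM docs
WHERE x.exist('//*/text()[contains(., "rpc")]') = 1;
```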