This can be improved in a number of ways; then it should be a matter of milliseconds.
Better Queries
This is just your query reformatted with aliases and some noise removed to clear the fog:
SELECT count(DISTINCT t.id)
FROM tickets t
JOIN transactions tr ON tr.objectid = t.id
JOIN attachments a ON a.transactionid = tr.id
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
Most of the problem with your query lies in the first two tables, tickets and transactions, which are missing from the question; I'm filling in with educated guesses.
t.status, t.type and tr.objecttype should probably not be text, but enum or possibly some very small value referencing a look-up table.
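A minimal sketch of the enum approach; the value list here is invented for illustration, adjust it to the statuses actually in use:

```sql
-- Hypothetical value list: replace with the statuses your tickets table uses
CREATE TYPE ticket_status AS ENUM ('new', 'open', 'resolved', 'deleted');

ALTER TABLE tickets
  ALTER COLUMN status TYPE ticket_status
  USING status::ticket_status;
```

Enum values occupy 4 bytes on disk and compare as integers internally, which is cheaper than comparing text. Note that indexes and views referencing the column may need to be recreated.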
EXISTS semi-join
Assuming tickets.id is the primary key, this rewritten form should be much cheaper:
SELECT count(*)
FROM tickets t
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id
AND EXISTS (
SELECT 1
FROM transactions tr
JOIN attachments a ON a.transactionid = tr.id
WHERE tr.objectid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate')
);
Instead of multiplying rows with two 1:n joins, only to collapse multiple matches in the end with count(DISTINCT id), use an EXISTS semi-join, which can stop looking as soon as the first match is found and at the same time obsoletes the final DISTINCT step. Per documentation:
The subquery will generally only be executed long enough to determine
whether at least one row is returned, not all the way to completion.
Effectiveness depends on how many transactions per ticket and attachments per transaction there are.
Determine order of joins with join_collapse_limit
If you know that your search term for attachments.contentindex is very selective - more selective than other conditions in the query (which is probably the case for 'frobnicate', but not for 'problem') - you can force the sequence of joins. The query planner can hardly judge the selectivity of particular words, except for the most common ones. Per documentation:
join_collapse_limit (integer)
[...]
Because the query planner does not always choose the optimal
join order, advanced users can elect to temporarily set this variable
to 1, and then specify the join order they desire explicitly.
Use SET LOCAL to set it for the current transaction only.
BEGIN;
SET LOCAL join_collapse_limit = 1;
SELECT count(DISTINCT t.id)
FROM attachments a -- 1st
JOIN transactions tr ON tr.id = a.transactionid -- 2nd
JOIN tickets t ON t.id = tr.objectid -- 3rd
WHERE t.status <> 'deleted'
AND t.type = 'ticket'
AND t.effectiveid = t.id
AND tr.objecttype = 'RT::Ticket'
AND a.contentindex @@ plainto_tsquery('frobnicate');
ROLLBACK; -- or COMMIT;
The order of WHERE conditions is always irrelevant. Only the order of joins is relevant here.
Or use a CTE, like @jjanes explains in "Option 2", for a similar effect.
Indexes
B-tree indexes
Take all conditions on tickets that are used identically in most queries and create a partial index on tickets:
CREATE INDEX tickets_partial_idx
ON tickets(id)
WHERE status <> 'deleted'
AND type = 'ticket'
AND effectiveid = id;
If one of the conditions is variable, drop it from the WHERE clause and prepend the column as an index column instead.
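For example, if type were the variable condition, the index could look like this (a sketch; the index name and the choice of type as the variable column are assumptions for illustration):

```sql
-- "type" moves from the index predicate into the index columns,
-- so the index serves queries for any value of "type"
CREATE INDEX tickets_partial2_idx
ON tickets (type, id)
WHERE  status <> 'deleted'
AND    effectiveid = id;
```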
Another one on transactions
:
CREATE INDEX transactions_partial_idx
ON transactions (objecttype, objectid, id);
The third column is just to enable index-only scans.
Also, since you have this composite index with two integer columns on attachments:
"attachments3" btree (parent, transactionid)
This additional index is a complete waste, delete it:
"attachments1" btree (parent)
GIN index
Add transactionid to your GIN index to make it a lot more effective. This may be another silver bullet, because it potentially allows index-only scans, eliminating visits to the big table completely. You need additional operator classes provided by the additional module btree_gin. Detailed instructions:
"contentindex_idx" gin (transactionid, contentindex)
4 bytes from an integer column don't make the index much bigger. Also, fortunately for you, GIN indexes differ from B-tree indexes in a crucial aspect. Per documentation:
A multicolumn GIN index can be used with query conditions that involve
any subset of the index's columns. Unlike B-tree or GiST, index search
effectiveness is the same regardless of which index column(s) the
query conditions use.
Bold emphasis mine. So you just need the one (big and somewhat costly) GIN index.
Table definition
Move the integer not null columns to the front. This has a couple of minor positive effects on storage and performance, saving 4 - 8 bytes per row in this case.
Table "public.attachments"
Column | Type | Modifiers
-----------------+-----------------------------+------------------------------
id | integer | not null default nextval('...
transactionid | integer | not null
parent | integer | not null default 0
creator | integer | not null default 0 -- !
created | timestamp | -- !
messageid | character varying(160) |
subject | character varying(255) |
filename | character varying(255) |
contenttype | character varying(80) |
contentencoding | character varying(80) |
content | text |
headers | text |
contentindex | tsvector |
Bound to the query
If you can only change the view, not the query: this is 100% equivalent, using a correlated subquery instead of the LEFT JOIN:
CREATE VIEW the_view_new AS
SELECT a.id, a.name
, (SELECT age_group FROM table_b WHERE id = a.id) AS age_group
FROM table_a a;
Your query as is now just reads the top and bottom rows from the index, IOW, blazingly fast. This is a workaround; it gets more complicated with more columns and may exhibit weak spots with other queries.
Better query
Your query:
SELECT MIN("id") AS "min_id",MAX("id") AS "max_id" FROM "the_view" LIMIT 1;
Only needs table_a. In the view, table_b is joined with a LEFT JOIN. Obviously, the query planner does realize that table_b is not needed for the result of min() and max() after a LEFT JOIN (contrary to what I assumed at first). The query plan does not mention table_b.
It can also only return a single row, so LIMIT 1 only complicates matters for the query planner, to no effect. (It seems not to be the tipping point here.)
The obfuscation confuses the planner enough to read all 100 million rows from the index (rows=106434752), while it would only need to look up the first and last row. That's a whole lot of pointless work.
This is simpler, cleaner and faster, while returning the same:
SELECT MIN(id) AS min_id, MAX(id) AS max_id FROM table_a;
As you can see in the EXPLAIN ANALYZE output of this SQL Fiddle with 100k rows in table_a:
Simple query
(actual time=0.010..0.010 rows=1 loops=1)
Your query:
(actual time=0.012..400.498 rows=100000 loops=1)
This looks like a weakness of the query planner in any case. We should repeat the test with pg 9.4 and possibly file a bug report ...
Bound to use the view
If you are bound to use the view (as commented), there is a workaround to convince the query planner (id must be NOT NULL - true in your case, since it's the PK):
SELECT (SELECT id AS min_id FROM the_view ORDER BY id ASC LIMIT 1) AS min_id
, (SELECT id AS max_id FROM the_view ORDER BY id DESC LIMIT 1) AS max_id;
Check the query plans in the fiddle: two times rows=1.
Aside: I suggest you take a look at the chapter Identifiers and Key Words in the manual.
Best Answer
There is no way to fix this with PostgreSQL 9.1.
Up to and including 9.2, any referencing table was locked during an update. This was fixed in PostgreSQL 9.3 (also unsupported by now).
Quote from the release notes: