PostgreSQL – Slow fulltext search due to wildly inaccurate row estimates

execution-plan, full-text-search, performance, postgresql, postgresql-9.1, query-performance

Fulltext queries against this database (storing RT (Request Tracker) tickets) seem to be taking a very long time to execute. The attachments table (containing the fulltext data) is about 15GB.

The table schema is as follows; it holds about 2 million rows:

rt4=# \d+ attachments
                                                    Table "public.attachments"
     Column      |            Type             |                        Modifiers                         | Storage  | Description 
-----------------+-----------------------------+----------------------------------------------------------+----------+-------------
 id              | integer                     | not null default nextval('attachments_id_seq'::regclass) | plain    | 
 transactionid   | integer                     | not null                                                 | plain    | 
 parent          | integer                     | not null default 0                                       | plain    | 
 messageid       | character varying(160)      |                                                          | extended | 
 subject         | character varying(255)      |                                                          | extended | 
 filename        | character varying(255)      |                                                          | extended | 
 contenttype     | character varying(80)       |                                                          | extended | 
 contentencoding | character varying(80)       |                                                          | extended | 
 content         | text                        |                                                          | extended | 
 headers         | text                        |                                                          | extended | 
 creator         | integer                     | not null default 0                                       | plain    | 
 created         | timestamp without time zone |                                                          | plain    | 
 contentindex    | tsvector                    |                                                          | extended | 
Indexes:
    "attachments_pkey" PRIMARY KEY, btree (id)
    "attachments1" btree (parent)
    "attachments2" btree (transactionid)
    "attachments3" btree (parent, transactionid)
    "contentindex_idx" gin (contentindex)
Has OIDs: no

I can query the database on its own very quickly (<1s) with a query such as:

select objectid
from attachments
join transactions on attachments.transactionid = transactions.id
where contentindex @@ to_tsquery('frobnicate');

However, when RT runs a query that's supposed to perform a fulltext index search on the same table, it usually takes hundreds of seconds to complete. The query and its EXPLAIN ANALYZE output are as follows:

Query

SELECT COUNT(DISTINCT main.id)
FROM Tickets main
JOIN Transactions Transactions_1 ON ( Transactions_1.ObjectType = 'RT::Ticket' )
                                AND ( Transactions_1.ObjectId = main.id )
JOIN Attachments Attachments_2 ON ( Attachments_2.TransactionId = Transactions_1.id )
WHERE (main.Status != 'deleted')
AND ( ( ( Attachments_2.ContentIndex @@ plainto_tsquery('frobnicate') ) ) )
AND (main.Type = 'ticket')
AND (main.EffectiveId = main.id);

EXPLAIN ANALYZE output

                                                                             QUERY PLAN 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=51210.60..51210.61 rows=1 width=4) (actual time=477778.806..477778.806 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..51210.57 rows=15 width=4) (actual time=17943.986..477775.174 rows=4197 loops=1)
         ->  Nested Loop  (cost=0.00..40643.08 rows=6507 width=8) (actual time=8.526..20610.380 rows=1714818 loops=1)
               ->  Seq Scan on tickets main  (cost=0.00..9818.37 rows=598 width=8) (actual time=0.008..256.042 rows=96990 loops=1)
                      Filter: (((status)::text <> 'deleted'::text) AND (id = effectiveid) AND ((type)::text = 'ticket'::text))
               ->  Index Scan using transactions1 on transactions transactions_1  (cost=0.00..51.36 rows=15 width=8) (actual time=0.102..0.202 rows=18 loops=96990)
                     Index Cond: (((objecttype)::text = 'RT::Ticket'::text) AND (objectid = main.id))
         ->  Index Scan using attachments2 on attachments attachments_2  (cost=0.00..1.61 rows=1 width=4) (actual time=0.266..0.266 rows=0 loops=1714818)
               Index Cond: (transactionid = transactions_1.id)
               Filter: (contentindex @@ plainto_tsquery('frobnicate'::text))
 Total runtime: 477778.883 ms

As far as I can tell, the issue appears to be that it's not using the index created on the contentindex field (contentindex_idx); instead it's filtering a large number of rows in the attachments table. The row counts in the EXPLAIN output also appear to be wildly inaccurate, even after a recent ANALYZE: estimated rows=6507 vs. actual rows=1714818.
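One cheap check for bad estimates is to raise the per-column statistics target and re-ANALYZE (a sketch; the target value and column choice are illustrative). Note, though, that PostgreSQL 9.1 keeps no cross-column statistics, so the correlation behind `effectiveid = id` would likely remain invisible to the planner either way:

```sql
-- Increase the sample size the planner uses for these columns (default 100),
-- then refresh the statistics. Values and columns here are only examples.
ALTER TABLE tickets ALTER COLUMN status SET STATISTICS 1000;
ALTER TABLE tickets ALTER COLUMN type   SET STATISTICS 1000;
ANALYZE tickets;
```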

I'm not really sure where to go next with this.

Best Answer

This can be improved in a thousand and one ways; then it should be a matter of milliseconds.

Better Queries

This is just your query reformatted with aliases and some noise removed to clear the fog:

SELECT count(DISTINCT t.id)
FROM   tickets      t
JOIN   transactions tr ON tr.objectid = t.id
JOIN   attachments  a  ON a.transactionid = tr.id
WHERE  t.status <> 'deleted'
AND    t.type = 'ticket'
AND    t.effectiveid = t.id
AND    tr.objecttype = 'RT::Ticket'
AND    a.contentindex @@ plainto_tsquery('frobnicate');

Most of the problem with your query lies in the first two tables tickets and transactions, which are missing from the question. I'm filling in with educated guesses.

  • t.status, t.type and tr.objecttype should probably not be text, but enum or possibly some very small value referencing a look-up table.
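As a sketch of the enum suggestion (the value list below is hypothetical; enumerate the statuses your installation actually uses, and be aware that changing the column type rewrites the table and may conflict with what RT expects):

```sql
-- Hypothetical enum replacing a text/varchar status column:
CREATE TYPE ticket_status AS ENUM
  ('new', 'open', 'stalled', 'resolved', 'rejected', 'deleted');

ALTER TABLE tickets
  ALTER COLUMN status TYPE ticket_status
  USING status::text::ticket_status;
```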

EXISTS semi-join

Assuming tickets.id is the primary key, this rewritten form should be much cheaper:

SELECT count(*)
FROM   tickets t
WHERE  status <> 'deleted'
AND    type = 'ticket'
AND    effectiveid = id
AND    EXISTS (
   SELECT 1
   FROM   transactions tr
   JOIN   attachments  a ON a.transactionid = tr.id
   WHERE  tr.objectid = t.id
   AND    tr.objecttype = 'RT::Ticket'
   AND    a.contentindex @@ plainto_tsquery('frobnicate')
   );

Instead of multiplying rows with two 1:n joins, only to collapse multiple matches in the end with count(DISTINCT id), use an EXISTS semi-join, which can stop looking as soon as the first match is found and at the same time makes the final DISTINCT step obsolete. Per the documentation:

The subquery will generally only be executed long enough to determine whether at least one row is returned, not all the way to completion.

Effectiveness depends on how many transactions per ticket and attachments per transaction there are.

Determine order of joins with join_collapse_limit

If you know that your search term for attachments.contentindex is very selective - more selective than the other conditions in the query (which is probably the case for 'frobnicate', but not for 'problem') - you can force the sequence of joins. The query planner can hardly judge the selectivity of particular words, except for the most common ones. Per the documentation:

join_collapse_limit (integer)

[...]
Because the query planner does not always choose the optimal join order, advanced users can elect to temporarily set this variable to 1, and then specify the join order they desire explicitly.

Use SET LOCAL to change the setting for the current transaction only.

BEGIN;
SET LOCAL join_collapse_limit = 1;

SELECT count(DISTINCT t.id)
FROM   attachments  a                              -- 1st
JOIN   transactions tr ON tr.id = a.transactionid  -- 2nd
JOIN   tickets      t  ON t.id = tr.objectid       -- 3rd
WHERE  t.status <> 'deleted'
AND    t.type = 'ticket'
AND    t.effectiveid = t.id
AND    tr.objecttype = 'RT::Ticket'
AND    a.contentindex @@ plainto_tsquery('frobnicate');

ROLLBACK; -- or COMMIT;

The order of WHERE conditions is always irrelevant. Only the order of joins is relevant here.

Or use a CTE, like @jjanes explains in "Option 2", for a similar effect.

Indexes

B-tree indexes

Take all conditions on tickets that are used identically in most queries and create a partial index on tickets:

CREATE INDEX tickets_partial_idx
ON tickets(id)
WHERE  status <> 'deleted'
AND    type = 'ticket'
AND    effectiveid = id;

If one of the conditions is variable, drop it from the WHERE condition and prepend the column as index column instead.
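For example (a sketch, assuming status is the condition that varies between queries):

```sql
-- status moves out of the predicate and becomes the leading index column:
CREATE INDEX tickets_partial_idx
ON tickets (status, id)
WHERE  type = 'ticket'
AND    effectiveid = id;
```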

Another one on transactions:

CREATE INDEX transactions_partial_idx
ON transactions(objecttype, objectid, id);

The third column is just to enable index-only scans.

Also, since you have this composite index with two integer columns on attachments:

"attachments3" btree (parent, transactionid)

This additional index is a complete waste; delete it, since its single column parent is already the leading column of attachments3:

"attachments1" btree (parent)


GIN index

Add transactionid to your GIN index to make it a lot more effective. This may be another silver bullet, because it potentially allows index-only scans, eliminating visits to the big table completely. To include the integer column you need the additional operator classes provided by the module btree_gin. The index then looks like this:

"contentindex_idx" gin (transactionid, contentindex)

4 bytes from an integer column don't make the index much bigger. Also, fortunately for you, GIN indexes are different from B-tree indexes in a crucial aspect. Per documentation:

A multicolumn GIN index can be used with query conditions that involve any subset of the index's columns. Unlike B-tree or GiST, index search effectiveness is the same regardless of which index column(s) the query conditions use.

Bold emphasis mine. So you just need the one (big and somewhat costly) GIN index.
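The DDL could look like the following sketch (the index name is kept from the existing one; CREATE EXTENSION is available from PostgreSQL 9.1 and typically requires superuser rights):

```sql
-- Operator classes so the integer column can participate in a GIN index:
CREATE EXTENSION btree_gin;

-- Replace the single-column GIN index with the multicolumn one:
DROP INDEX contentindex_idx;
CREATE INDEX contentindex_idx ON attachments
USING gin (transactionid, contentindex);
```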

Table definition

Move the integer not null columns to the front. This has a couple of minor positive effects on storage and performance. Saves 4 - 8 bytes per row in this case.

                      Table "public.attachments"
         Column      |            Type             |         Modifiers
    -----------------+-----------------------------+------------------------------
     id              | integer                     | not null default nextval('...
     transactionid   | integer                     | not null
     parent          | integer                     | not null default 0
     creator         | integer                     | not null default 0  -- !
     created         | timestamp                   |                     -- !
     messageid       | character varying(160)      |
     subject         | character varying(255)      |
     filename        | character varying(255)      |
     contenttype     | character varying(80)       |
     contentencoding | character varying(80)       |
     content         | text                        |
     headers         | text                        |
     contentindex    | tsvector                    |