PostgreSQL – Searching a big table for a few results without scanning the whole table

performance, performance-tuning, postgresql

I need to search a very big PostgreSQL table (500M+ rows). I want to limit the returned search results, but using the LIMIT keyword can't prevent the scan over the whole set of data (is that correct?)

Imagine my search result contains 1M rows, but I need only the first 100 records in the search result! Does PostgreSQL have to create these 1M result rows temporarily in memory and then give me my needed 100 results?

Or is there any way to tell PostgreSQL to stop searching as soon as it finds 100 records?

Here is my table (of course not populated with 500M records yet!):

CREATE TABLE con
(
  id bigserial NOT NULL,
  tag1 integer NOT NULL DEFAULT 0,
  tag2 integer NOT NULL DEFAULT 0,
  ref1 integer NOT NULL DEFAULT 0,
  ref2 integer NOT NULL DEFAULT 0,
  CONSTRAINT con_pkey PRIMARY KEY (id)
);

And the EXPLAIN ANALYZE output for a test query:

EXPLAIN ANALYZE SELECT * FROM con WHERE tag1 = '64813' AND tag2 = '80';
Seq Scan on con  (cost=0.00..3215204.72 rows=2470 width=112) (actual time=0.016..36970.528 rows=7505 loops=1)
   Filter: ((tag1 = 64813) AND (tag2 = 80))
  Rows Removed by Filter: 152519685
Total runtime: 36972.921 ms

Best Answer

but using the LIMIT keyword can't prevent the scan over the whole set of data (is that correct?)

Correct; it'll just limit how many results are returned.

For some kinds of query it can also limit how many are scanned in the first place, but you can't rely on that in the general case.

does PostgreSQL have to create these 1M result rows temporarily in memory and then give me my needed 100 results?

Create, yes. In memory, not necessarily. It'll often spill to disk, or discard unneeded results as it goes so it only keeps the top 100. It depends on the details of the query.
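As an illustrative check (this query and its ORDER BY column are hypothetical, not from the original post), EXPLAIN ANALYZE shows what a LIMITed sort actually did:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM con
WHERE tag1 = 64813
ORDER BY ref1
LIMIT 100;

-- When the plan contains a Sort node, its "Sort Method" line reports
-- "top-N heapsort" if only the best 100 rows were kept in memory,
-- or "external merge" if the sort spilled to disk.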

or is there any way to tell PostgreSQL to stop searching as soon as it finds 100 records?

Write a query where that is possible. You haven't shown the query or schema, so this part is impossible to answer usefully.

In general, if you have an ORDER BY on some kind of search-relevance field then the whole data set must be searched and then filtered for the top-n results.

In short: "It depends". Specifically, it depends on the schema, available indexes, and on exactly what you're searching for and what results you want.

This:

SELECT id 
FROM mytable 
WHERE somefield > 100 
ORDER BY somefield DESC
LIMIT 100;

would only scan the needed rows if there was an index on mytable(somefield DESC). (A plain index on mytable(somefield) works too, since b-tree indexes can be scanned in either direction.)
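For example (a sketch using the placeholder names from the query above):

-- Lets the planner walk the index from the high end of somefield
-- and stop after 100 rows, instead of scanning the whole table:
CREATE INDEX mytable_somefield_idx ON mytable (somefield DESC);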

This:

SELECT id, title
FROM mytable
WHERE title LIKE '%something%'
ORDER BY calculate_relevance(title, 'something')
LIMIT 100;

would always scan the whole table: an infix text pattern match (LIKE '%blah%') can't use a b-tree index, and an expression index on the imaginary calculate_relevance(title, 'something') would only help searches for that exact literal (and would require the function to be IMMUTABLE), so it can't serve arbitrary search terms.
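As an aside (this workaround is not part of the original answer): the pg_trgm extension's trigram GIN indexes can accelerate infix LIKE matches on PostgreSQL 9.1+, even though the relevance-ordering problem remains:

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX mytable_title_trgm_idx ON mytable USING gin (title gin_trgm_ops);

-- This predicate can now use the trigram index instead of a full scan;
-- the ORDER BY calculate_relevance(...) would still sort every match.
SELECT id, title
FROM mytable
WHERE title LIKE '%something%';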

So... it depends totally on what you're searching for and how.


Is this answer too hand-wavy and general for you? So's the question. Supply specifics and you'll get more specific answers. A http://sqlfiddle.com/ of the schema with sample data, plus your PostgreSQL version, a table of the expected results, and a description of what you want to find is usually the minimum for this kind of thing.


Update: A few problems here.

  • There's no index on tag1 and/or tag2, so Pg must do a seqscan of the table, reading rows until it has found 100 matches (or has scanned the whole table, if there are fewer). This is extremely inefficient.

  • There's no ORDER BY, so for a LIMIT PostgreSQL is free to return whichever matching rows it finds first, in any order.

  • Your schema shows signs of being unnecessarily denormalized. Columns named tag1, tag2, etc. usually suggest bad table design. I strongly recommend that you study relational database design and normalization.

In this specific case, you'd be able to avoid the full table scan if you CREATE INDEX con_tag1_tag2_idx ON con(tag1, tag2), as sketched below. But that'll only work for searches that use tag1, optionally together with tag2. It won't help (much) with searches for just tag2, searches for ref1, etc. (Thanks Erwin for the important note on multicolumn index use when the first column is not included; it was a surprise to me.)
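Concretely (the example LIMIT queries here are illustrative):

CREATE INDEX con_tag1_tag2_idx ON con (tag1, tag2);

-- Can use the index: the leading column (tag1) is constrained.
SELECT * FROM con WHERE tag1 = 64813 AND tag2 = 80 LIMIT 100;
SELECT * FROM con WHERE tag1 = 64813 LIMIT 100;

-- Generally cannot use it efficiently: the leading column is missing.
SELECT * FROM con WHERE tag2 = 80 LIMIT 100;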

Don't create lots of indexes, though. Each one uses disk space and adds disk I/O to every insert, update and delete.

The answer is often proper normalization.

Overall I think you need to buy/borrow a couple of good introductory books on relational database design and basic to intermediate SQL, preferably PostgreSQL-focused, and do some study.