PostgreSQL – How to Optimize Slow Query Performance

performance, postgresql, query-performance

I'm building a search interface for a website that currently searches through approximately 2 million pieces of real estate with roughly 60 million attributes (address, city, state, ZIP, beds, baths, assessed value, etc.).

The tables are laid out so that there is a LEAD table (2 million rows) with one entry for each piece of real estate and a corresponding LEAD_DETAIL table (60 million rows) that holds name/value pairs for the attributes.

The LEAD table has the following structure:

LEAD_ID BIGINT PRIMARY KEY
LEAD_TYPE_NAME VARCHAR(50)
CREATED_DATE_TIME TIMESTAMP
MODIFIED_DATE_TIME TIMESTAMP
RECEIVED_EMAIL_ID BIGINT
NEW_LEAD SMALLINT
FDOLD_ID INTEGER

And the LEAD_DETAIL table has the following structure:

LEAD_DETAIL_ID BIGINT PRIMARY KEY
LEAD_ID BIGINT
NAME VARCHAR(100)
VALUE_STRING VARCHAR(1000)
VALUE_NUMERIC NUMERIC
VALUE_DATE DATE
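
For reference, here is a minimal DDL sketch of the two tables as listed above (the foreign key from LEAD_DETAIL to LEAD is my assumption; it isn't shown in the listings):

CREATE TABLE lead (
    lead_id            BIGINT PRIMARY KEY,
    lead_type_name     VARCHAR(50),
    created_date_time  TIMESTAMP,
    modified_date_time TIMESTAMP,
    received_email_id  BIGINT,
    new_lead           SMALLINT,
    fdold_id           INTEGER
);

CREATE TABLE lead_detail (
    lead_detail_id BIGINT PRIMARY KEY,
    lead_id        BIGINT REFERENCES lead (lead_id),  -- FK assumed, not stated above
    name           VARCHAR(100),
    value_string   VARCHAR(1000),
    value_numeric  NUMERIC,
    value_date     DATE
);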

The LEAD table has the following indexes (I've tried many indexing strategies to get this to work):

"lead_pkey" PRIMARY KEY, btree (lead_id)
"lead_created_date_time_idx" btree (created_date_time)
"lead_created_date_time_idx1" btree (created_date_time DESC)
"lead_created_date_time_lead_id_idx" btree (created_date_time DESC, lead_id)
"lead_fdold_id_idx" btree (fdold_id)
"lead_lead_id_created_date_time_idx" btree (lead_id, created_date_time DESC)
"lead_lead_id_lead_type_name_created_date_time_idx" btree (lead_id, lead_type_name, created_date_time DESC)
"lead_modified_date_time_idx" btree (modified_date_time)

And the LEAD_DETAIL table has the following indexes:

"lead_detail_pkey" PRIMARY KEY, btree (lead_detail_id)
"lead_detail_lead_id_name_key" UNIQUE CONSTRAINT, btree (lead_id, name)
"lead_detail_name_value_date_idx" btree (name, value_date)
"lead_detail_name_value_numeric_idx" btree (name, value_numeric)
"lead_detail_name_value_string_idx" btree (name, value_string)
"lead_detail_upper_value_string_idx" btree (name, upper(value_string::text))

Here is the query that I'm generating:

SELECT   lead.lead_id
FROM     lead
JOIN     lead_detail ON lead.lead_id = lead_detail.lead_id
WHERE    lead_detail.name = 'State'
AND      UPPER(lead_detail.value_string) = 'FL'
ORDER BY lead.created_date_time DESC
LIMIT    5000;

Now, the distribution of the data is such that currently every lead is in the state of FL, so this is not exactly a highly "selective" query. More selective queries, e.g. for a small county, are quite fast.
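
(To confirm the skew, a quick distribution check along these lines shows how many leads each state value has:)

SELECT   value_string AS state, count(*) AS leads
FROM     lead_detail
WHERE    name = 'State'
GROUP BY value_string
ORDER BY count(*) DESC;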

Here is the explain plan on explain.depesz.com
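
For anyone reproducing this: explain.depesz.com takes the output of EXPLAIN ANALYZE, along these lines (the BUFFERS option needs PostgreSQL 9.0 or later; use plain EXPLAIN ANALYZE before that):

EXPLAIN (ANALYZE, BUFFERS)
SELECT   lead.lead_id
FROM     lead
JOIN     lead_detail ON lead.lead_id = lead_detail.lead_id
WHERE    lead_detail.name = 'State'
AND      UPPER(lead_detail.value_string) = 'FL'
ORDER BY lead.created_date_time DESC
LIMIT    5000;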

My question is: given that we are not preventing users from issuing non-selective queries, how can we quickly return the top-n results for large result sets? I was hoping to use an index on the CREATED_DATE_TIME column in conjunction with an ORDER BY and a LIMIT to produce the results quickly.
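
For illustration, the access path I had in mind is the one a bare top-n query gets, where the planner walks the created_date_time index backwards and stops after 5000 rows; something like this (without the join) should use that plan:

-- With lead_created_date_time_idx in place, the planner can satisfy
-- ORDER BY ... LIMIT directly from the index and stop early.
SELECT   lead_id
FROM     lead
ORDER BY created_date_time DESC
LIMIT    5000;

It's the join to LEAD_DETAIL that seems to defeat this.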

I've been working on this for hours and could really use some help. Thanks in advance!

Best Answer

It looks like (I'm by no means a Postgres expert) you have about two million rows in the LEAD_DETAIL table that satisfy the state condition... those two million rows are retrieved, hash-joined to LEAD, and the result is sorted to return the top 5000 rows. Could you move the state into the LEAD table, so that it becomes a column there? Create an index on (upper(state), created_date_time) and try again, rewriting your query to use the new state column in the LEAD table, of course.
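
A sketch of that suggestion (the column and index names are mine; the backfill relies on the unique constraint on (lead_id, name) guaranteeing at most one 'State' row per lead):

-- Denormalize state onto LEAD (type mirrors value_string).
ALTER TABLE lead ADD COLUMN state VARCHAR(1000);

UPDATE lead
SET    state = lead_detail.value_string
FROM   lead_detail
WHERE  lead_detail.lead_id = lead.lead_id
AND    lead_detail.name = 'State';

CREATE INDEX lead_upper_state_created_date_time_idx
    ON lead (upper(state), created_date_time DESC);

-- With equality on upper(state), the first 5000 rows come straight
-- off the new index, already in the right order.
SELECT   lead_id
FROM     lead
WHERE    upper(state) = 'FL'
ORDER BY created_date_time DESC
LIMIT    5000;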

And by the way, most of your indexes on the LEAD table are useless. Why have both lead_created_date_time_idx and lead_created_date_time_idx1? They are effectively the same: an ascending b-tree index can also be scanned backwards to satisfy DESC sorts. Furthermore, why include lead_id in the other indexes (except the one supporting the primary key constraint)?
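
If you decide to prune them, something like the following would do it (check pg_stat_user_indexes for scan counts first to be sure nothing depends on them):

-- The DESC index duplicates the ascending one; a b-tree can be
-- scanned in either direction.
DROP INDEX lead_created_date_time_idx1;

-- The primary key index already covers lookups by lead_id alone,
-- so these composites buy little here.
DROP INDEX lead_lead_id_created_date_time_idx;
DROP INDEX lead_lead_id_lead_type_name_created_date_time_idx;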