PostgreSQL – Index not being used in SELECT query

database-design | index | performance | postgresql | postgresql-performance

I have a table of about 3.25M rows with the following format in Postgres 9.4.1:

CREATE TABLE stats
(
    id serial NOT NULL,
    type character varying(255) NOT NULL,
    "references" jsonb NOT NULL,
    path jsonb,
    data jsonb,
    "createdAt" timestamp with time zone NOT NULL,
    CONSTRAINT stats_pkey PRIMARY KEY (id)
)
WITH (
    OIDS=FALSE
);

The type is a simple string no longer than 50 characters.

The references column is a JSON object holding simple key/value pairs, only ever 1 level deep; the values are always strings. It could be

{
    "fruit": "plum",
    "car": "toyota"
}

or it could be

{
    "project": "2532"
}

The createdAt timestamp is not always generated by the database (but it will be by default if a value isn't supplied).

I'm currently using the table with test data only. In this data every row has a project key as a reference, so there are 3.25M rows with a project key. There are exactly 400,000 distinct values for the project reference. There are only 5 distinct values for the type field; this would probably be no more than a few hundred in production.

So I'm trying to index the table to perform the following query quickly:

SELECT
  EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt"))) 
FROM
  stats
WHERE
  stats."references"::jsonb ? 'project' AND
  (
    stats."type" = 'event1' OR
    (
      stats."type" = 'event2' AND
      stats."createdAt" > '2015-11-02T00:00:00+08:00' AND
      stats."createdAt" < '2015-12-03T23:59:59+08:00'
    )
  )
GROUP BY stats."references"::jsonb->> 'project'

The query returns the time distance between two events based on two stats rows that share the same reference value, in this case project. There is only ever 1 row for each type and selected reference value, but there may also be no rows, in which case the result returned is 0 (this is averaged out later in a different part of a larger query).

I've created an index on the createdAt, references and type columns, but the query execution plan shows a full scan instead.

The index

CREATE INDEX "stats_createdAt_references_type_idx"
    ON stats
    USING btree
    ("createdAt", "references", type COLLATE pg_catalog."default");

Execution plan:

 HashAggregate  (cost=111188.31..111188.33 rows=1 width=38) 
                (actual time=714.499..714.499 rows=0 loops=1)
   Group Key: ("references" ->> 'project'::text)
      ->  Seq Scan on stats  (cost=0.00..111188.30 rows=1 width=38) 
                             (actual time=714.498..714.498 rows=0 loops=1)
          Filter: (
              (("references" ? 'project'::text) 
               AND ((type)::text = 'event1'::text)) OR 
              (((type)::text = 'event2'::text) 
               AND ("createdAt" > '2015-11-02 05:00:00+13'::timestamp with time zone) 
               AND ("createdAt" < '2015-12-04 04:59:59+13'::timestamp with time zone)))

Rows Removed by Filter: 3258680
Planning time: 0.163 ms
Execution time: 714.534 ms

I'm really not that clued up on indexing and query execution plans so if someone could point me in the right direction that'd be great.

Edit

As noted by Erwin, it appears as though even if I did have the correct indexes, a table scan would still occur, as the portion of the table returned by the query is very large. Does this mean that for this set of data this is the fastest query time I can get? I'm assuming that if I added 60M more unrelated rows without a project reference, it might use an index (if I had the correct indexes), but I don't see how adding more data could speed up the query. Maybe I'm missing something.

Best Answer

According to your current explanation, indexes are not going to help much (if at all) with your current query.

So there are 3.25M rows with a project key.

That's also the total number of rows, so this predicate is true for (almost) every row ... and not selective at all. But there is no useful index for the jsonb column "references". Including it in the btree index on ("createdAt", "references", type) is just pointless.

Even if you had a generally more useful GIN index on "references" like:

CREATE INDEX stats_references_gix ON stats USING gin ("references");

... Postgres would still have no useful statistics about individual keys inside the jsonb column.
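
As an aside, if you only ever tested containment rather than key existence, a leaner jsonb_path_ops variant of that GIN index would be smaller and faster. A sketch (the index name is just an example); note that this operator class supports the containment operator @>, but not the key-exists operator ? used in your query:

CREATE INDEX stats_references_path_gix ON stats
USING gin ("references" jsonb_path_ops);

-- supported:     "references" @> '{"project": "2532"}'
-- NOT supported: "references" ? 'project'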


There are only 5 distinct values for the type

Your query selects all of one type and an unknown fraction of another type. That's an estimated 20 - 40 % of all rows. A sequential scan is most certainly going to be the fastest plan. Indexes typically start to make sense for around 5 % of all rows or less.

To test, you can force the use of a possible index by setting, for debugging purposes in your session:

SET enable_seqscan = off;

Reset with:

RESET enable_seqscan;

You'll see slower queries ...
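
A minimal test session along these lines, using EXPLAIN (ANALYZE, BUFFERS) to compare actual timings (substitute your real query; the shown one is just a placeholder):

SET enable_seqscan = off;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM stats WHERE type = 'event1';

RESET enable_seqscan;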


You group by project values:

GROUP BY "references"->> 'project'

And:

There are exactly 400,000 distinct values for the project reference.

That's 8 rows per project on average. Depending on value frequencies, we would still have to retrieve an estimated 3 - 20 % of all rows even if we only picked min and max per project in a LATERAL subquery ...
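
Such a LATERAL query might look like this - just a sketch, assuming a hypothetical project table listing all distinct project values (without it, the list of projects would first have to be derived from stats itself):

SELECT p.project
     , EXTRACT(EPOCH FROM (s.max_created - s.min_created)) AS delta
FROM   project p
CROSS  JOIN LATERAL (
   SELECT min("createdAt") AS min_created
        , max("createdAt") AS max_created
   FROM   stats
   WHERE  "references"->>'project' = p.project
   -- add the type / time-range predicates from the original query here
   ) s;

With a matching index on (("references"->>'project'), "createdAt"), each min/max lookup per project can be served from the index.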

Try this index; it makes more sense than what you have now:

CREATE INDEX stats_special_idx ON stats (type, ("references" ->> 'project'), "createdAt")
WHERE "references" ? 'project';

Postgres might still fall back to a sequential scan ...

More might be done with a normalized schema / more selective criteria / a smarter query that only picks min and max "createdAt" ...

Query

I would write your query like this:

SELECT EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt"))) 
FROM   stats
WHERE  "references" ? 'project'
AND   (type = 'event1' OR
       type = 'event2'
   AND "createdAt" >= '2015-11-02 00:00:00+08:00'  -- I guess you want this
   AND "createdAt" <  '2015-12-04 00:00:00+08:00'
      )
GROUP  BY "references"->> 'project';  -- don't cast

Notes

  • Don't cast here:

    stats."references"::jsonb ? 'project'

    The column is jsonb already, you gain nothing. If the predicate was selective, index usage might be prohibited by the cast.

  • Your predicates on "createdAt" are probably incorrect at the lower and upper bound. To include whole days, consider my suggested alternative.

  • references is a reserved word, so you always have to double-quote it. Don't use it as an identifier. The same goes for double-quoted CaMeL-case names like "createdAt": allowed, but error-prone, a needless complication.

  • type

    type character varying(255) NOT NULL,

    The type is a simple string no longer than 50 characters.

    There are only 5 distinct values for the type field, this would probably be no more than a few hundred in production.

    None of this seems to make sense.

    • varchar(255) in itself hardly ever makes any sense. 255 characters is an arbitrary limit without significance in Postgres.
    • If it's no longer than 50 chars, then the limit of 255 makes even less sense.
    • In a properly normalized design you would have a small integer column type_id (referencing a small type table) which occupies only 4 bytes per row and makes indexes smaller and faster.
  • Ideally, you would have a project table listing all projects, and another small integer FK column project_id in stats. That would make any such query faster. And for selective criteria, much faster queries would be possible - even without the suggested normalization. Along these lines:

  • Optimize GROUP BY query to retrieve latest record per user
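
A normalized schema along the lines suggested above might look like this - a sketch with hypothetical names, not a drop-in migration:

CREATE TABLE type (
    type_id serial PRIMARY KEY
  , type    text NOT NULL UNIQUE
);

CREATE TABLE project (
    project_id serial PRIMARY KEY
  , project    text NOT NULL UNIQUE
);

CREATE TABLE stats (
    id           serial PRIMARY KEY
  , type_id      int NOT NULL REFERENCES type
  , project_id   int REFERENCES project     -- NULL if no project reference
  , "references" jsonb NOT NULL             -- remaining key/value pairs
  , path         jsonb
  , data         jsonb
  , created_at   timestamptz NOT NULL DEFAULT now()
);

CREATE INDEX stats_type_project_created_idx
ON stats (type_id, project_id, created_at);

The integer FK columns keep rows and indexes small, and the btree index above directly supports filtering by type and grouping min/max per project.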