PostgreSQL – Select Unique IDs Where Second Column is Unique

Tags: greatest-n-per-group, postgresql, select

Look at the following example, starting from the top row (id=9) and working your way down, selecting a limit of 4 rows whose sec values we have not yet seen. We select id=9 because we don't yet have sec=1. We continue working down like this, but when we get to id=7 we skip it, because we already have sec=5 (from the row with id=8). We continue in the same manner and finally stop at id=3, because we have accumulated 4 rows (our desired limit).

 id | sec
----+-----
  9 |   1  <- 1
  8 |   5  <- 2
  7 |   5  # skip, already have sec=5
  6 |   4  <- 3
  5 |   1  # skip, already have sec=1
  4 |   1  # skip, already have sec=1
  3 |   3  <- 4
  2 |   2
  1 |   1

Of course, the algorithm the SQL actually uses can (and will!) be different from the one I described.

Desired result:

 id
----
  9
  8
  6
  3
(4 rows)

If I wanted to increase the limit to 5 rows, then the row with id=2 would be included in the results. However, if I increased the limit to 6 rows, the row with id=1 would not be added because sec=1 has already been seen.

Note: Though it shouldn't matter, I am on PostgreSQL 9.3.1.


In case you want to quickly build the table to test this out:

CREATE TABLE my_table (id serial primary key, sec integer DEFAULT 0 NOT NULL);
INSERT INTO my_table (sec) VALUES
  (1)
, (2)
, (3)
, (1)
, (1)
, (4)
, (5)
, (5)
, (1);
CREATE INDEX index_my_table_on_sec ON my_table (sec);

Best Answer

In Postgres, this is simpler with DISTINCT ON:

SELECT *
FROM (
   SELECT DISTINCT ON (sec)
          id, sec
   FROM   my_table
   ORDER  BY sec, id DESC
   ) sub
ORDER  BY id DESC
LIMIT  4;
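To see what the subquery contributes on the sample data above: DISTINCT ON (sec) keeps exactly one row per sec value, namely the one with the greatest id (because of ORDER BY sec, id DESC). The outer query then re-sorts by id and applies the limit.

SELECT DISTINCT ON (sec) id, sec
FROM   my_table
ORDER  BY sec, id DESC;

 id | sec
----+-----
  9 |   1
  2 |   2
  3 |   3
  6 |   4
  8 |   5
(5 rows)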

Detailed explanation in a related answer on SO.

For a big table and a small LIMIT, neither this nor @a_horse's solution is very efficient. The subquery ploughs through the whole table, wasting a lot of time ...
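@a_horse's solution is not quoted here; as an assumption (not the original answer), a typical window-function approach to this problem would look like:

SELECT id
FROM  (
   SELECT id, row_number() OVER (PARTITION BY sec ORDER BY id DESC) AS rn
   FROM   my_table
   ) sub
WHERE  rn = 1      -- greatest id per sec value
ORDER  BY id DESC
LIMIT  4;

It suffers from the same problem: row numbers have to be computed for the whole table before the LIMIT can apply.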

Recursive CTE

I have tried and failed to solve similar problems with a recursive CTE in the past and resorted to a procedural solution with PL/pgSQL.

Finally, here is a working rCTE:

WITH RECURSIVE cte AS (
   (  -- parentheses required around each leg
   -- anchor: start at the greatest id; arr collects the sec values seen so far
   SELECT id, '{}'::int[] AS last_arr, ARRAY[sec] AS arr
   FROM   my_table
   ORDER  BY id DESC
   LIMIT  1
   )
   UNION ALL
   (
   -- step to the next lower id; prepend its sec to arr only if it is new
   SELECT b.id, c.arr
        , CASE WHEN b.sec = ANY (c.arr) THEN c.arr ELSE b.sec || c.arr END
   FROM   cte c
   JOIN   my_table b ON b.id < c.id
   WHERE  array_length(c.arr, 1) < 4   -- stop once 4 distinct sec values are collected
   ORDER  BY id DESC
   LIMIT  1
   )
   )
SELECT id, arr[1] AS sec   -- arr[1] is the sec value just prepended
FROM   cte
WHERE  last_arr <> arr;    -- keep only rows that added a new sec value
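To see why the final filter works, this is what cte contains for the sample data above: each iteration visits the next lower id, and arr only changes when a new sec value turns up:

 id | last_arr | arr
----+----------+-----------
  9 | {}       | {1}
  8 | {1}      | {5,1}
  7 | {5,1}    | {5,1}       -- filtered out: arr unchanged
  6 | {5,1}    | {4,5,1}
  5 | {4,5,1}  | {4,5,1}     -- filtered out
  4 | {4,5,1}  | {4,5,1}     -- filtered out
  3 | {4,5,1}  | {3,4,5,1}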

It's not as fast or elegant as I had hoped for and not nearly as fast as the function below, but faster than the query above in my tests.

PL/pgSQL function

Fastest by far:

CREATE OR REPLACE FUNCTION f_first_uniq(_rows int)
   RETURNS TABLE (id int, sec int) AS
$func$
DECLARE
   _arr int[];   -- collects distinct sec values seen so far
BEGIN
   FOR id, sec IN
      SELECT t.id, t.sec FROM my_table t ORDER BY t.id DESC
   LOOP
      IF sec = ANY (_arr) THEN
         -- do nothing: sec value already seen
      ELSE
         RETURN NEXT;              -- emit the row
         _arr := _arr || sec;      -- remember its sec value
         EXIT WHEN array_length(_arr, 1) >= _rows;  -- done after _rows distinct values
      END IF;
   END LOOP;
END
$func$  LANGUAGE plpgsql;

Call:

SELECT * FROM f_first_uniq(4);
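On the sample data it should return:

 id | sec
----+-----
  9 |   1
  8 |   5
  6 |   4
  3 |   3
(4 rows)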


This could be made to work for any table, with table and column names as parameters, using dynamic SQL with EXECUTE ...
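A minimal sketch of that dynamic variant, assuming a hypothetical function name f_first_uniq_any and relying on format() to quote identifiers safely (illustration only, untested):

CREATE OR REPLACE FUNCTION f_first_uniq_any(_tbl regclass, _id_col text,
                                            _sec_col text, _rows int)
   RETURNS TABLE (id int, sec int) AS
$func$
DECLARE
   _arr int[];
BEGIN
   FOR id, sec IN EXECUTE
      format('SELECT %I, %I FROM %s ORDER BY %I DESC'
           , _id_col, _sec_col, _tbl, _id_col)
   LOOP
      IF sec = ANY (_arr) THEN
         -- sec value already seen: skip
      ELSE
         RETURN NEXT;
         _arr := _arr || sec;
         EXIT WHEN array_length(_arr, 1) >= _rows;
      END IF;
   END LOOP;
END
$func$  LANGUAGE plpgsql;

SELECT * FROM f_first_uniq_any('my_table', 'id', 'sec', 4);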

Why bother?

In a test table with only 30k rows, the function ran 2000x faster than the above query (which already ran ~ 30% faster than @a_horse's version). The difference grows with the size of the table: the function's performance stays roughly constant, while the query's performance gets progressively worse, because it has to find the distinct values across the whole table first. Try this in a table with a million rows ...
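For example, a quick way to build such a test table (hypothetical numbers: one million rows with sec values spread over 1..20):

TRUNCATE my_table RESTART IDENTITY;
INSERT INTO my_table (sec)
SELECT trunc(random() * 20)::int + 1
FROM   generate_series(1, 1000000);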