This query will do. The trick is to use COUNT(DISTINCT city_id):
SQL Fiddle
PostgreSQL 8.3.20 Schema Setup:
CREATE TABLE Table1
("id" int, "name" varchar(6), "language" varchar(1), "dialect" varchar(2), "city_id" int)
;
INSERT INTO Table1
("id", "name", "language", "dialect", "city_id")
VALUES
(01, 'London', 'A', 'A1', 1),
(02, 'London', 'A', 'A2', 1),
(03, 'London', 'B', 'B1', 2),
(04, 'London', 'B', 'B2', 3)
;
Query 1:
SELECT t.*, d.dups
FROM table1 t
INNER JOIN (
    SELECT name, language, count(DISTINCT city_id) AS dups
    FROM table1
    GROUP BY name, language
    HAVING count(DISTINCT city_id) > 1
) d ON t.name = d.name AND t.language = d.language
Results:
| ID | NAME | LANGUAGE | DIALECT | CITY_ID | DUPS |
|----|--------|----------|---------|---------|------|
| 3 | London | B | B1 | 2 | 2 |
| 4 | London | B | B2 | 3 | 2 |
Idea 1
Judging by their names, the columns "denormalizedData" and "hugeText" seem to be comparatively big, probably many times as big as the columns involved in your query. Size matters for big queries like this. Very big values (> 2 kB) for text or jsonb get "TOASTed" (stored out of line), which can avert the worst. But even the remainder, or smaller values stored inline, can be several times as big as the columns relevant to your query, which span around 100 bytes.
Splitting the columns relevant to the query into a separate 1:1 table might go a long way; a sketch follows below. (It depends on the complete picture. You add some storage overhead for another row header and another PK, and writing to the tables gets a bit more complicated and expensive.)
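A minimal sketch of such a split, using hypothetical table and column names (only "denormalizedData" and "hugeText" are taken from your description, the rest is assumed):
-- Narrow table with only the columns the hot query touches
CREATE TABLE item (
    item_id        bigserial PRIMARY KEY,
    "textCol"      text NOT NULL,
    "timestampCol" timestamptz NOT NULL,
    amount         numeric NOT NULL
);
-- 1:1 side table holding the wide, rarely needed columns
CREATE TABLE item_payload (
    item_id            bigint PRIMARY KEY REFERENCES item,
    "denormalizedData" jsonb,
    "hugeText"         text
);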
Idea 2
Also, as you confirmed, only 4 columns are relevant to determine the top 50. You might have an angle there for a much smaller materialized view (MV) containing just those columns plus "timestampCol" and "textCol", and only the "last 2 weeks" or "last month" of data. Run a fast query on the MV to identify the top 50 "textCol" values, then retrieve only those rows from the big table - or, to be precise, just the additional columns not contained in your MV, since you already get sums for the rest in the first step.
You only need an index on ("textCol") for the big table, and another one on ("timestampCol") for the MV, which would only be used for instances of your query with a selective WHERE clause. Else, it will be cheaper to sequentially scan the whole MV.
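A sketch of how that could look; the MV name, the amount column, and the two-week cutoff are assumptions, not your actual schema:
-- Small MV with only the columns needed to find the top 50
CREATE MATERIALIZED VIEW recent_rows AS
SELECT "textCol", "timestampCol", amount
FROM   bigtable
WHERE  "timestampCol" >= now() - interval '2 weeks';

CREATE INDEX ON recent_rows ("timestampCol");  -- only useful for selective WHERE clauses
CREATE INDEX ON bigtable ("textCol");          -- to fetch matching rows from the big table

-- Step 1: find the top 50 in the small MV.
-- Step 2: pull the remaining columns from the big table for just those values.
SELECT b.*
FROM   bigtable b
JOIN  (
    SELECT "textCol"
    FROM   recent_rows
    GROUP  BY "textCol"
    ORDER  BY sum(amount) DESC
    LIMIT  50
    ) top50 USING ("textCol");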
If many of your queries cover the same period of time, you might go one step further: only save one row per "textCol" in the MV with pre-aggregated sums (maybe two or more MVs for a couple of frequently used time periods). You get the idea. That should be much faster still.
You might even create the MVs with the whole result set and refresh before the first new query for the day.
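A sketch of the pre-aggregated variant, again with assumed names:
-- One row per "textCol", with sums already computed for a fixed period
CREATE MATERIALIZED VIEW top_last_2_weeks AS
SELECT "textCol", sum(amount) AS total
FROM   bigtable
WHERE  "timestampCol" >= now() - interval '2 weeks'
GROUP  BY "textCol";

-- Refresh once per day, before the first query
REFRESH MATERIALIZED VIEW top_last_2_weeks;

-- The top-50 query then collapses to:
SELECT "textCol", total
FROM   top_last_2_weeks
ORDER  BY total DESC
LIMIT  50;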
Depending on exact numbers, you might combine both ideas.
Best Answer
It stores what you inserted. Normalize it.
PostgreSQL may store larger text and varchar values out-of-line in a side table (TOAST), possibly compressed - but not small labels. BTW, the length modifier in varchar(255) has no storage or performance effect in Postgres; it behaves the same as text with a length CHECK constraint.
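For illustration, these two definitions behave essentially the same (the table names are made up):
-- Neither form pads the value or stores it differently because of the limit
CREATE TABLE labels_a (label varchar(255));
CREATE TABLE labels_b (label text CHECK (char_length(label) <= 255));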