You write:
Each customer can have multiple sites, but only one should be
displayed in this list.
Yet, your query retrieves all rows. That would be a point to optimize. But you also do not define which site is to be picked.
Either way, it does not matter much here. Your EXPLAIN shows only 5026 rows for the site scan (5018 for the customer scan). So hardly any customer actually has more than one site. Did you ANALYZE your tables before running EXPLAIN?
From the numbers I see in your EXPLAIN, indexes will give you nothing for this query. Sequential table scans will be the fastest possible way. Half a second is rather slow for 5000 rows, though. Maybe your database needs some general performance tuning?
Maybe the query itself is faster, but "half a second" includes network transfer? EXPLAIN ANALYZE would tell us more.
If this query is your bottleneck, I would suggest you implement a materialized view.
After you provided more information, I find that my diagnosis pretty much holds.
The query itself needs 27 ms. Not much of a problem there. "Half a second" was the kind of misunderstanding I had suspected. The slow part is the network transfer (plus ssh encoding / decoding, possibly rendering). You should only retrieve 100 rows; that would solve most of it, even if it means executing the whole query every time.
If you go the route with a materialized view like I proposed, you could add a serial number without gaps to the table, plus an index on it, by adding a column row_number() OVER (<your sort criteria here>) AS mv_id.
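For illustration, a minimal sketch of such a view (customer is an assumed table name, ordering by name stands in for your sort criteria; CREATE MATERIALIZED VIEW requires Postgres 9.3 or later):
CREATE MATERIALIZED VIEW materialized_view AS
SELECT row_number() OVER (ORDER BY c.name) AS mv_id  -- serial number without gaps
     , c.*
FROM   customer c;

CREATE INDEX materialized_view_mv_id_idx ON materialized_view (mv_id);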
Then you can query:
SELECT *
FROM materialized_view
WHERE mv_id >= 2700
AND mv_id < 2800;
This will perform very fast. LIMIT / OFFSET cannot compete; that needs to compute the whole table before it can sort and pick 100 rows.
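For contrast, paging the original query with LIMIT / OFFSET (table and column names are assumptions) has to compute and sort the complete result before it can pick its 100 rows:
SELECT *
FROM   customer c
JOIN   site s ON s.customer_id = c.id  -- assumed join condition
ORDER  BY c.name
LIMIT  100
OFFSET 2700;  -- the whole result is computed and sorted first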
pgAdmin timing
When you execute a query from the query tool, the message pane shows something like:
Total query runtime: 62 ms.
And the status line shows the same time. I quote pgAdmin help about that:
The status line will show how long the last query took to complete. If
a dataset was returned, not only the elapsed time for server execution
is displayed, but also the time to retrieve the data from the server
to the Data Output page.
If you want to see the time on the server, you need to use SQL EXPLAIN ANALYZE, or the built-in Shift + F7 keyboard shortcut, or Query -> Explain analyze. Then, at the bottom of the explain output, you get something like this:
Total runtime: 0.269 ms
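For instance, just prefix your query with the two keywords (the query below is a made-up stand-in for yours):
EXPLAIN ANALYZE
SELECT c.*, s.site_name
FROM   customer c
JOIN   site s ON s.customer_id = c.id;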
Your table definition looks reasonable all over now. With all columns NOT NULL, the UNIQUE constraint will work as expected - except for typos and minor differences in spelling, which may be rather common, I am afraid. Consider @a_horse's comment.
Alternative with functional unique index
The other option would be a functional unique index (similar to what @Dave commented). But I would use a uuid data type to optimize index size and performance.
The cast from array to text is not IMMUTABLE (due to its generic implementation). Hence, you need a little helper function to declare it immutable:
CREATE OR REPLACE FUNCTION f_movie_uuid(_title text
                                      , _runtime int2
                                      , _released_in int2
                                      , _genres text[]
                                      , _tags text[]
                                      , _origin text[])
  RETURNS uuid LANGUAGE sql IMMUTABLE AS  -- faking IMMUTABLE
'SELECT md5(_title || _runtime::text || _released_in::text
         || _genres::text || _tags::text || _origin::text)::uuid';
Use it for the index definition:
CREATE UNIQUE INDEX movies_uni_idx
ON movies (f_movie_uuid(title, runtime, released_in, genres, tags, origin));
SQL Fiddle.
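To see the index at work: inserting the same row twice (sample values are made up) raises a unique violation on the second attempt:
INSERT INTO movies (title, runtime, released_in, genres, tags, origin)
VALUES ('Metropolis', 153, 1927, '{Sci-Fi}', '{silent}', '{Germany}');

INSERT INTO movies (title, runtime, released_in, genres, tags, origin)
VALUES ('Metropolis', 153, 1927, '{Sci-Fi}', '{silent}', '{Germany}');
-- ERROR:  duplicate key value violates unique constraint "movies_uni_idx"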
More Details:
You might use the generated UUID as PK, but I would still use the serial column with its 4 bytes, which is simple and cheap for FK references and other purposes. A UUID would be a great option for distributed systems that need to generate PK values independently, or for very huge tables - but there aren't nearly enough movies in our solar system for that.
Pros and Cons
A unique constraint is implemented with a unique index on the involved columns. Put relevant columns in the constraint definition first and you have a useful index for other purposes as collateral benefit.
There are other specific benefits; here is a list:
The functional unique index is (potentially much) smaller in size, which can make it substantially faster. If your columns are not too big, the difference won't be much. There is also the small overhead cost for the calculation.
Concatenating all columns can introduce false positives ('foo' || 'bar' = 'foob' || 'ar'), but that seems very unlikely for this case. Typos are so much more likely that you can safely ignore it here.
Uniqueness and arrays
Arrays would have to be sorted consistently to make sense in any unique arrangement relying on the = operator, because '{1,2}' <> '{2,1}'. I suggest look-up tables for genre, tag and origin with serial PK and unique entries, which allow fuzzy search for array elements. Then:
either implement fully normalized n:m relationships that also provide referential integrity. Uniqueness of each set of references is harder to establish; you could use a MATERIALIZED VIEW (MV) with aggregated arrays as stepping stone.
or operate with sorted arrays of FK references (which cannot yet be supported with FK constraints). Tools from the additional module intarray may come in handy - see the sketch after this list:
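A minimal sketch of the second route (all names are made up; sort() comes from the intarray module):
CREATE EXTENSION IF NOT EXISTS intarray;

CREATE TABLE genre (
  genre_id serial PRIMARY KEY
, genre    text NOT NULL UNIQUE
);

CREATE TABLE movie (
  movie_id  serial PRIMARY KEY
, title     text NOT NULL
, genre_ids int[] NOT NULL  -- sorted FK values, not enforced by FK constraints
);

-- keep the array sorted on the way in
INSERT INTO movie (title, genre_ids)
VALUES ('Metropolis', sort('{7,3}'::int[]));  -- stored as {3,7}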
Either way, working with arrays directly or with a normalized schema and a materialized view, searching can be very efficient with the right index and operators:
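For example (continuing the made-up schema above), a GIN index supports array containment searches:
CREATE INDEX movie_genre_ids_idx ON movie USING gin (genre_ids);

SELECT *
FROM   movie
WHERE  genre_ids @> '{3,7}';  -- movies linked to both genres 3 and 7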
Aside
If you are using Postgres 9.4 or later, consider jsonb instead of json.
Best Answer
Not an answer for Postgres, but it may be useful nevertheless to someone using Oracle who stumbles across this:
Oracle allows partial uniqueness to be enforced across multiple tables using a fast refreshing materialised view. Tom Kyte describes it here. In short, if a join produces any rows on commit, it violates a constraint on the materialised view.
Untested, but in principle it should work like so:
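A sketch of the idea, with made-up tables t1 and t2 and the rule that no value of val may exist in both tables:
-- materialized view logs with ROWID are required for fast refresh
CREATE MATERIALIZED VIEW LOG ON t1 WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON t2 WITH ROWID;

-- the join produces one row per violation of the rule
CREATE MATERIALIZED VIEW mv_no_overlap
  REFRESH FAST ON COMMIT AS
SELECT t1.ROWID AS r1, t2.ROWID AS r2
FROM   t1, t2
WHERE  t1.val = t2.val;

-- forbid all rows: any violation makes the commit fail
ALTER TABLE mv_no_overlap
  ADD CONSTRAINT no_overlap CHECK (1 = 0);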