I have a table representing movies. The fields are:

id (PK), title, genre, runtime, released_in, tags, origin, downloads

My database cannot be polluted by duplicate rows, so I want to enforce uniqueness. The problem is that different movies could have the same title, or even all the same fields except tags and downloads. How can I enforce uniqueness?
I thought of two ways:

- Make all the fields except downloads the primary key. I'm keeping downloads out since it's JSON and would probably hurt performance.
- Keep only id as the primary key, but add a unique constraint over all the other columns (again, except downloads).
I read this question, which is very similar, but I didn't quite understand what I should do. Currently this table is not related to any other tables, but in the future it could be.
At the moment I have slightly fewer than 20,000 records, but I expect the number to grow. I don't know whether this is relevant to the issue.
EDIT: I modified the schema and here is how I would create the table:
CREATE TABLE movies (
id serial PRIMARY KEY,
title text NOT NULL,
runtime smallint NOT NULL CHECK (runtime >= 0),
released_in smallint NOT NULL CHECK (released_in > 0),
genres text[] NOT NULL default ARRAY[]::text[],
tags text[] NOT NULL default ARRAY[]::text[],
origin text[] NOT NULL default ARRAY[]::text[],
downloads json NOT NULL,
inserted_at timestamp NOT NULL default current_timestamp,
CONSTRAINT must_be_unique UNIQUE(title,runtime,released_in,genres,tags,origin)
);
I also added the inserted_at timestamp column, but that is not a problem as I won't touch it, so it will always be automatic and unique.
Best Answer
Your table definition looks reasonable all over now. With all columns NOT NULL, the UNIQUE constraint will work as expected - except for typos and minor differences in spelling, which may be rather common, I am afraid. Consider @a_horse's comment.

Alternative with functional unique index
The other option would be a functional unique index (similar to what @Dave commented). But I would use a uuid data type to optimize index size and performance.

The cast from array to text is not IMMUTABLE (due to its generic implementation), hence you need a little helper function to declare it immutable:
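A minimal sketch of such a helper (the function name textarr2text is my own, not from the original):

```sql
-- IMMUTABLE wrapper around the array-to-text cast, so that the
-- expression can legally be used in an index definition.
CREATE OR REPLACE FUNCTION textarr2text(text[])
  RETURNS text
  LANGUAGE sql IMMUTABLE AS
$$SELECT $1::text$$;
```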
Use it for the index definition:
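Something along these lines, assuming the immutable helper function is named textarr2text (a sketch, not the original definition):

```sql
-- Hash all relevant columns into a single uuid and enforce uniqueness
-- on that value. The '#' separators guard against false positives
-- from plain concatenation.
CREATE UNIQUE INDEX movies_uni_idx ON movies (
  (md5(title
       || '#' || runtime::text
       || '#' || released_in::text
       || '#' || textarr2text(genres)
       || '#' || textarr2text(tags)
       || '#' || textarr2text(origin))::uuid)
);
```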
SQL Fiddle.
More Details:
You might use the generated UUID as the PK, but I would still use the serial column with its 4 bytes, which is simple and cheap for FK references and other purposes. A UUID would be a great option for distributed systems that need to generate PK values independently, or for very huge tables - but there aren't nearly enough movies in our solar system for that.

Pros and Cons
A unique constraint is implemented with a unique index on the involved columns. Put relevant columns in the constraint definition first and you have a useful index for other purposes as collateral benefit.
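For example, since title leads the constraint definition in the schema above, its underlying index can also serve plain title lookups (the movie title here is just an illustrative value):

```sql
-- The index behind must_be_unique starts with title,
-- so this query can use it without any extra index:
SELECT * FROM movies WHERE title = 'Metropolis';
```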
There are other specific benefits; here is a list:

- The functional unique index is (potentially much) smaller in size, which can make it substantially faster. If your columns are not too big, the difference won't be much. There is also a small overhead cost for the calculation.
- Concatenating all columns can introduce false positives ('foo ' || 'bar' = 'foob ' || 'ar'), but that seems very unlikely in this case. Typos are so much more likely that you can safely ignore it here.

Uniqueness and arrays
Arrays would have to be sorted consistently to make sense in any unique arrangement relying on the = operator, because '{1,2}' <> '{2,1}'. I suggest look-up tables for genre, tag and origin with serial PK and unique entries, which allow fuzzy search for array elements. Then:

- either implement fully normalized n:m relationships, which also provide referential integrity. Uniqueness of each set of references is harder to establish; you could use a MATERIALIZED VIEW (MV) with aggregated arrays as a stepping stone,
- or operate with sorted arrays of FK references (which cannot yet be supported with FK constraints). Tools from the additional module intarray may come in handy:
Either way, working with arrays directly or with a normalized schema and a materialized view, searching can be very efficient with the right index and operators:
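As an illustration, a GIN index on an array column supports the containment and overlap operators (column names from the schema above; the tag values are made up):

```sql
-- A GIN index serves @>, <@ and && on the array column:
CREATE INDEX movies_tags_gin_idx ON movies USING gin (tags);

-- All movies carrying both tags:
SELECT title FROM movies WHERE tags @> ARRAY['noir', 'cult'];
```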
If you are using Postgres 9.4 or later, consider jsonb instead of json.
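Converting the existing column could look like this (a sketch; it assumes all stored values are valid JSON):

```sql
ALTER TABLE movies ALTER COLUMN downloads TYPE jsonb USING downloads::jsonb;
```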