You can do that in pure SQL. Create a partial unique index in addition to the one you have:
CREATE UNIQUE INDEX ab_c_null_idx ON my_table (id_A, id_B) WHERE id_C IS NULL;
This way you can enter each of these rows for (id_A, id_B, id_C) in your table:
(1, 2, 1)
(1, 2, 2)
(1, 2, NULL)
But none of them a second time.
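To illustrate (a sketch, assuming the complete UNIQUE constraint from your question plus the partial index above):

```sql
INSERT INTO my_table (id_A, id_B, id_C) VALUES (1, 2, 1);     -- OK
INSERT INTO my_table (id_A, id_B, id_C) VALUES (1, 2, 2);     -- OK
INSERT INTO my_table (id_A, id_B, id_C) VALUES (1, 2, NULL);  -- OK
INSERT INTO my_table (id_A, id_B, id_C) VALUES (1, 2, NULL);  -- unique violation on ab_c_null_idx
```

The last statement fails because both NULL rows fall under the partial index, where only (id_A, id_B) counts.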
Or use two partial UNIQUE indexes and no complete index (or constraint). The best solution depends on the details of your requirements.
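The two-index variant could look like this (a sketch; the index names are made up):

```sql
-- rows with id_C: all three columns must be unique together
CREATE UNIQUE INDEX ab_c_not_null_idx ON my_table (id_A, id_B, id_C)
WHERE id_C IS NOT NULL;

-- rows without id_C: only one row per (id_A, id_B)
CREATE UNIQUE INDEX ab_c_null_idx ON my_table (id_A, id_B)
WHERE id_C IS NULL;
```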
While this is elegant and efficient for a single nullable column in the UNIQUE index, it gets out of hand quickly for more than one.
Asides
There is no use for mixed-case identifiers without double quotes in PostgreSQL: unquoted identifiers are folded to lower case.
You might consider a serial column as primary key, or an IDENTITY column in Postgres 10 or later.
So:
CREATE TABLE my_table (
my_table_id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY -- for pg 10+
-- my_table_id bigserial PRIMARY KEY -- for pg 9.6 or older
, id_a int8 NOT NULL
, id_b int8 NOT NULL
, id_c int8
, CONSTRAINT u_constraint UNIQUE (id_a, id_b, id_c)
);
If you don't expect more than 2 billion rows (> 2147483647) over the lifetime of your table (including waste and deleted rows), consider integer (4 bytes) instead of bigint (8 bytes).
(I will try to compile an answer from the comments.)
It looks like there is a problem in the process with which your application generates the value for the primary key. Generally it is wiser to leave this to the DBMS: define a sequence and let it fill the PK values. If you have a sequence, you have at least two options: either fetch the value like
$eventid = exec(SELECT nextval('your_sequence')) // pseudocode!
and use this in the INSERT statement, or completely leave it to PostgreSQL by omitting the PK from the INSERT:
INSERT INTO events (source, object, [...])
VALUES (2, 3, [...]);
(Personally I usually opt for the second solution.)
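With the second approach you can still get the generated key back in the same statement, using the RETURNING clause (a sketch; the column names follow the example above):

```sql
INSERT INTO events (source, object)
VALUES (2, 3)
RETURNING eventid;  -- hands back the value the sequence assigned
```

That avoids a separate round trip for nextval() while still letting the DBMS manage the key.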
You can switch to this behaviour by setting the data type of your PK column to bigserial (in case of integer, it would be just serial):
ALTER TABLE events ALTER COLUMN eventid TYPE bigserial;
This will raise a notice telling you that a sequence called (probably) events_eventid_seq has been created. You can use it as described above.
EDIT (2012-11-27)
You can set the sequence by a query like
SELECT setval('events_eventid_seq', (SELECT max(eventid) FROM events));
In order to let the sequence produce the next value for eventid, you must not set a value explicitly in your INSERT. If you do, PostgreSQL will take the manually supplied value and won't advance the sequence, which will cause the same unique violation error later.
Regarding ALTER TABLE: as you could see, there is some discrepancy between ALTER TABLE (failing with "type does not exist") and CREATE TABLE. Are you sure you tried this on the same database? I can't remember the older versions, but already 8.3 had the bigserial shorthand (it's not a real type). (Giving it further thought, it is quite possible you can't use it in ALTER TABLE, since it is a shorthand that only works in CREATE TABLE.) You can solve the problem by following the manual:
CREATE SEQUENCE events_eventid_seq;
ALTER TABLE events
ALTER COLUMN eventid TYPE bigint;
ALTER TABLE events  -- without this, the sequence is never used automatically
ALTER COLUMN eventid SET DEFAULT nextval('events_eventid_seq');
ALTER SEQUENCE events_eventid_seq OWNED BY events.eventid;
SELECT setval('events_eventid_seq', (SELECT max(eventid) FROM events));
Best Answer
A partial index as suggested by @ypercube is likely the best solution. If for some reason you would like to keep it at the logical level, you can use a generated column:
If ACTIVE: concatenate the "partially unique" columns. Use a token separator to distinguish the columns, so that the result of 'a' || 'bc' differs from 'ab' || 'c'.
If NOT ACTIVE: use the primary key instead of the "unique" columns.
I used xyz as the name of the generated column because I did not know better :-)
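A sketch of that idea, assuming Postgres 12 or later for generated columns (the table and column names, including xyz, are made up; the distinct prefixes keep the two branches from ever colliding with each other):

```sql
CREATE TABLE my_table (
  my_table_id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
, col_a  text    NOT NULL
, col_b  text    NOT NULL
, active boolean NOT NULL
  -- active rows: unique on (col_a, col_b); inactive rows: the PK keeps them distinct
, xyz text GENERATED ALWAYS AS (
    CASE WHEN active
         THEN 'u|' || col_a || '|' || col_b  -- '|' separator: 'a' || 'bc' <> 'ab' || 'c'
         ELSE 'k|' || my_table_id::text      -- PK makes every inactive row unique
    END
  ) STORED UNIQUE
);
```

The UNIQUE constraint on xyz then enforces uniqueness of (col_a, col_b) for active rows only, at the cost of an extra stored column - which is why the partial index is usually preferable.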