You can't use on delete set null if any of the foreign key columns don't accept null:
create schema stack;
set search_path=stack;
--
create table t1( foo integer
, bar integer
, primary key(foo,bar) );
--
create table t2( foo integer
, baz integer
, bar integer
, primary key (foo,baz) -- foo is NOT NULL because it is part of this primary key
, foreign key (foo,bar) references t1 on delete set null );
--
insert into t1 values (1,1);
insert into t2 values (1,1,1);
delete from t1;
/*
ERROR: null value in column "foo" violates not-null constraint
DETAIL: Failing row contains (null, 1, null).
CONTEXT: SQL statement "UPDATE ONLY "postgres"."t2" SET "foo" = NULL, "bar" = NULL WHERE $1 OPERATOR(pg_catalog.=) "foo" AND $2 OPERATOR(pg_catalog.=) "bar""
*/
--
drop schema stack cascade;
And the same is true for on delete set default. I don't know of any simple way around this, and I'm not sure I agree that it is the most sensible behaviour; I think a better choice for on delete set null would be to set only the nullable columns to null when the foreign key contains a mix of nullable and not-null columns.
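For what it's worth, that preferred behaviour can be approximated by hand. Under PostgreSQL's default MATCH SIMPLE semantics, a composite foreign key with any null column is treated as non-referencing, so a trigger on t1 that nulls only the nullable column before the delete is enough. This is only a sketch of my own (the function and trigger names are made up, and execute function needs PostgreSQL 11 or later; older versions spell it execute procedure):
--
create function t1_null_bar() returns trigger
language plpgsql as $$
begin
  -- null only t2.bar; t2.foo is part of t2's primary key and must stay
  update t2 set bar = null
   where foo = old.foo and bar = old.bar;
  -- with bar null, the constraint's own SET NULL action finds no row to touch
  return old;
end $$;
--
create trigger t1_before_delete
  before delete on t1
  for each row execute function t1_null_bar();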
The solution I've seen has been to remove the foreign key constraint, and to maintain referential integrity via triggers.
On the foreign (or "child") table, you need INSERT
and UPDATE
triggers. if the "foreign key" column(s) are inserted/updated, then you have to make sure the value(s) exists in the primary key of the "parent" table.
On the "parent" table, on UPDATE
of the primary key column(s), you have to locate all rows in any "child" tables (FarmEquipment
and Fields
would both be "children" of Farmers
) with the old value, and update them to the new value.
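A hedged T-SQL sketch of that parent-side UPDATE trigger; note that it only handles single-row updates, because pairing old and new key values across a multi-row update requires some column that doesn't change:

create trigger trg_Farmers_pk_update
on Farmers
after update
as
begin
  -- sketch only: single-row key changes; column names are illustrative
  if update(FarmerID) and (select count(*) from inserted) = 1
  begin
    declare @old int = (select FarmerID from deleted);
    declare @new int = (select FarmerID from inserted);
    update FarmEquipment set FarmerID = @new where FarmerID = @old;
    update Fields        set FarmerID = @new where FarmerID = @old;
  end
end;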
On the "parent" table, on DELETE
, you have to check the "child" tables to see if any rows use the primary key(s) in question, and delete them all.
Having worked with a vendor-supplied system that worked this way, I can't recommend it.
- As noted above, you must maintain three chunks of code for each foreign key relationship, across a minimum of two triggers (let's leave self-referential situations out of the discussion, for the moment at least).
- A "parent" table that tied to multiple "child" tables would need a check for each "child" table. While the code should be very similar for each "child" table, that can actually make modifications more difficult, as it would be easy to make a change meant to affect
FarmEquipment
to Fields
by mistake. (In the real-world example I mentioned, there was at least one parent table with at least 20 "FK" relationships - sometimes having multiple links in the same child table, to different fields.
- Documentation of the relationship is no longer built-in. While you can't (to the best of my knowledge) right-click a table and find all foreign key relationships that point to it, it's easy to write a query that retrieves this information from SQL Server's system catalog (a sketch of such a query follows this list). When the relationship is maintained via triggers, you pretty much have to go through the triggers manually to identify all the relationships and their precise definitions: the "parent" table's DELETE trigger is where you can determine whether the relationship is meant to be CASCADE, NO ACTION, SET NULL, or SET DEFAULT, whereas with foreign key constraints this can be seen from the "child" table.
There are workarounds (the trigger code does exist in a queryable form), but they require that the developers follow strict rules regarding naming conventions, the construction of the query, and possibly even the formatting of the code. Even if one person is maintaining the code, and is using a template to set everything up, keeping everything exactly so as changes are made and bugs are found and fixed becomes much more difficult.
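For comparison, this is the kind of catalog query that works when real foreign key constraints are present (a sketch; dbo.Farmers stands in for whatever parent table you're investigating):

select fk.name                              as constraint_name,
       object_name(fk.parent_object_id)     as child_table,
       object_name(fk.referenced_object_id) as parent_table,
       fk.delete_referential_action_desc    as on_delete
  from sys.foreign_keys as fk
 where fk.referenced_object_id = object_id('dbo.Farmers');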
The system I worked with did work, with this method, for more than 10 years; they'd started their app in an environment that didn't have effective foreign keys. But even they had gotten to the point where they were using foreign keys for new tables, and starting to put them in place on existing tables where possible. It's just that working with it wasn't easy.
I suspect this wouldn't meet your criteria; however, as it is a viable solution, I thought it was at least worth covering.
Best Answer
This question "what can go wrong" can only be answered by the developers at Microsoft or Sybase. When there are multiple cascade paths to the same record, it's possible that the code might attempt to delete it multiple times. If it isn't constructed to deal with the possibility that the record to be deleted has already been deleted, it might throw an error. Apparently this problem actually exists (or is thought to exist!), so instead of fixing it, the implementation avoids it by preventing the definition of duplicate cascade paths. It's clearly a short-cut, because other DBMS have no problem with this situation.