I'm coming from an Oracle/PostgreSQL background and have a hard time coping with the limits of SQL Server's trigger implementation.
I want to update two different datetime
columns if the value of a different column changes. This is to record two specific status changes of the row.
Now SQL Server has neither row-level triggers nor BEFORE UPDATE
triggers. So my understanding is that I need to join inserted and deleted to find out if, and which, status change occurred, and then run a "regular" update against the table being updated.
My trigger currently looks like this:
create trigger ord_status_history_trigger
on jobs
AFTER UPDATE
AS
IF ( UPDATE(order_status) )
BEGIN
update aj
set transfercomplete = case
        when o.order_status = 'initial' and n.order_status = 'sent' then current_timestamp
        else n.transfercomplete
    end,
    installcomplete = case
        when o.order_status = 'delivered' and n.order_status = 'installed' then current_timestamp
        else n.installcomplete
    end
from inserted n
join deleted o on o.jobid = n.jobid
join jobs aj on aj.jobid = n.jobid;
END
My question is: does this trigger effectively result in two updates on the table?
The first one being the triggering update and the second one being the one executed by the trigger?
Or is SQL Server smart enough to merge that into a single update to the base table (essentially the way it happens with, e.g., a BEFORE UPDATE
trigger in PostgreSQL)?
Do I create a potential performance bottleneck here?
We could set those timestamps from within the application as well, but it would be nicer if we could delegate this to the database, so we can be sure this is never forgotten.
Best Answer
Your trigger runs after the update has occurred on the table. If you update the table again inside the trigger, a new, separate update is performed. See also the discussion about the RECURSIVE_TRIGGERS database option.
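For reference, direct trigger recursion (a trigger's own UPDATE firing the same trigger again) is controlled per database. A quick sketch of how to check and disable it; `ALTER DATABASE CURRENT` assumes SQL Server 2012 or later, otherwise name the database explicitly:

```sql
-- Check whether direct trigger recursion is enabled for the current database
SELECT name, is_recursive_triggers_on
FROM sys.databases
WHERE name = DB_NAME();

-- Disable it so the UPDATE inside the trigger does not fire the trigger again
ALTER DATABASE CURRENT SET RECURSIVE_TRIGGERS OFF;
```

Note that RECURSIVE_TRIGGERS is OFF by default, so in most databases the trigger above will not re-fire itself anyway.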
If you want to run code before the update you will have to create an INSTEAD OF trigger.
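A minimal sketch of what the INSTEAD OF variant could look like, assuming the same jobs table and columns as in the question. Because an INSTEAD OF trigger replaces the triggering statement entirely, it must perform the actual update itself, and every updatable column has to be copied from inserted, not just the two timestamps:

```sql
create trigger ord_status_history_trigger
on jobs
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Perform the update ourselves, folding the timestamp logic
    -- into the same single statement
    update aj
    set order_status     = n.order_status,
        transfercomplete = case
                             when o.order_status = 'initial' and n.order_status = 'sent'
                             then current_timestamp
                             else n.transfercomplete
                           end,
        installcomplete  = case
                             when o.order_status = 'delivered' and n.order_status = 'installed'
                             then current_timestamp
                             else n.installcomplete
                           end
        -- ...every other updatable column must also be assigned from n,
        -- otherwise ordinary updates to those columns are silently lost...
    from jobs aj
    join inserted n on aj.jobid = n.jobid
    join deleted  o on o.jobid  = n.jobid;
END
```

The trade-off is exactly that maintenance burden: the AFTER trigger costs a second update, while the INSTEAD OF trigger does one update but must faithfully reproduce the whole original statement.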
Whether having to do two updates instead of one becomes a bottleneck depends entirely on the efficiency of the update in your trigger, and of course on whether this UPDATE is already on the critical path. You probably have a primary key on jobid, the pages are hot in the buffer pool, and you should not run into lock conflicts. So basically it is only subject to a) how fast your log can accept writes and b) whether the version store (which serves the deleted and inserted pseudo-tables) can keep up.
If you ask me: such business logic status housekeeping belongs in the application, not in a trigger...