SQL Server – Triggers vs. Jobs in SQL Server

jobs, sql-server, sql-server-2012, trigger

I’m designing a database with a table that automatically gets loaded with a new record every 15 seconds (each record will have a 'timestamp' field and around 50 ‘float’ fields). Each record in this table needs to be copied to another table. I have done some initial testing with two different approaches to do this:

  1. Use a trigger to insert each record into the other table (will fire every time a record is added to the source table – every 15 seconds)

  2. Use a job (SQL Server Agent) to run every ten minutes or so and insert a batch of records into the other table (and use a flag field in the source table to indicate which records have been processed).

I don’t have much experience with triggers or SQL Server Agent jobs, so I don’t know how they compare in terms of overhead and system resources (over time, both tables will hold a lot of records).

In general, what approach would be better? Is there another approach I should consider?

Thanks

Best Answer

When inserting one record every 15 seconds, I would not be worried about using a trigger.

Consider these two heap tables, with one rowversion column and 50 float columns each.

CREATE TABLE dbo.floattable1
(Rowver ROWVERSION
,Float1    float
,Float2    float
...);
CREATE TABLE dbo.floattable2
(Rowver ROWVERSION
,Float1    float
,Float2    float
...);

Both tables have ~2M rows (1940480).
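The answer doesn't show how the test rows were generated, so here is just one possible way to build comparable test data (not from the original answer; only Float1 and Float2 get values here for brevity, the remaining columns stay NULL):

-- Populate dbo.floattable1 with ~2M rows of test data.
INSERT INTO dbo.floattable1 (Float1, Float2)
SELECT TOP (1940480)
       RAND(CHECKSUM(NEWID())),   -- pseudo-random float per row
       RAND(CHECKSUM(NEWID()))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;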

Solution with a trigger

We create an AFTER INSERT trigger on dbo.floattable1 that also inserts every new row into dbo.floattable2:

CREATE TRIGGER dbo.floattrigger
ON dbo.FloatTable1
AFTER INSERT
AS
INSERT INTO dbo.floattable2
    (Float1,  Float2,  Float3,  Float4,  Float5,  Float6,  Float7,  Float8,  Float9,  Float10,
     Float11, Float12, Float13, Float14, Float15, Float16, Float17, Float18, Float19, Float20,
     Float21, Float22, Float23, Float24, Float25, Float26, Float27, Float28, Float29, Float30,
     Float31, Float32, Float33, Float34, Float35, Float36, Float37, Float38, Float39, Float40,
     Float41, Float42, Float43, Float44, Float45, Float46, Float47, Float48, Float49, Float50)
SELECT
     Float1,  Float2,  Float3,  Float4,  Float5,  Float6,  Float7,  Float8,  Float9,  Float10,
     Float11, Float12, Float13, Float14, Float15, Float16, Float17, Float18, Float19, Float20,
     Float21, Float22, Float23, Float24, Float25, Float26, Float27, Float28, Float29, Float30,
     Float31, Float32, Float33, Float34, Float35, Float36, Float37, Float38, Float39, Float40,
     Float41, Float42, Float43, Float44, Float45, Float46, Float47, Float48, Float49, Float50
FROM inserted;

Afterwards, we insert a single row into dbo.floattable1, which fires the trigger.
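The exact test statement isn't shown in the original answer; a minimal sketch of such a single-row insert (assuming the float columns are nullable, so unspecified columns simply stay NULL) could look like this:

SET STATISTICS IO, TIME ON;

INSERT INTO dbo.floattable1 (Float1, Float2)  -- fires dbo.floattrigger
VALUES (0.1, 0.2);                            -- arbitrary sample values

SET STATISTICS IO, TIME OFF;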


The trigger's insert gets a very simple plan with 1 logical read.

Solution with a job / flag column

Drop the trigger, add a FlagColumn, set it to 1 for the existing rows, and add an index on it.

DROP TRIGGER  dbo.floattrigger;
ALTER TABLE dbo.floattable1 
ADD FlagColumn bit;
UPDATE dbo.floattable1 SET FlagColumn = 1;

CREATE INDEX IX_FlagColumn on dbo.floattable1(FlagColumn);

Insert 40 test records with FlagColumn = 0:

SET STATISTICS IO, TIME ON;
INSERT INTO dbo.floattable1(Float1    ,Float2     ,... ,FlagColumn)
SELECT TOP(40) Float1     ,Float2     ,... , 0
FROM dbo.floattable1

Insert the new rows into dbo.floattable2 and update the flag column:

BEGIN TRANSACTION
INSERT INTO dbo.floattable2
(Float1   ,Float2     ,...)
SELECT Float1     ,Float2     ,...
FROM dbo.floattable1 
WHERE FlagColumn = 0;

UPDATE dbo.floattable1 
SET FlagColumn = 1
WHERE FlagColumn = 0;
COMMIT TRANSACTION

The first insert can be made as cheap as the trigger's insert by eliminating the RID lookup: add all the float columns to the index's included column list.
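A sketch of what that could look like, rebuilding the existing IX_FlagColumn as a covering index (the DROP_EXISTING rebuild shown here is just one way to do it, not part of the original answer):

-- Rebuild IX_FlagColumn as a covering index so the copy query needs no RID lookup.
CREATE INDEX IX_FlagColumn
ON dbo.floattable1 (FlagColumn)
INCLUDE (Float1,  Float2,  Float3,  Float4,  Float5,  Float6,  Float7,  Float8,  Float9,  Float10,
         Float11, Float12, Float13, Float14, Float15, Float16, Float17, Float18, Float19, Float20,
         Float21, Float22, Float23, Float24, Float25, Float26, Float27, Float28, Float29, Float30,
         Float31, Float32, Float33, Float34, Float35, Float36, Float37, Float38, Float39, Float40,
         Float41, Float42, Float43, Float44, Float45, Float46, Float47, Float48, Float49, Float50)
WITH (DROP_EXISTING = ON);   -- replaces the existing index in place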


The update, however, adds some extra overhead.


The advantage of this approach is that you can schedule the copy whenever you want, and the flag gives you some extra information about your data. On the other hand, the extra index adds overhead, and issuing an INSERT plus an UPDATE costs more than issuing a single INSERT.
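If you go the job route, scheduling is straightforward. Below is a minimal sketch of creating a SQL Server Agent job that runs every 10 minutes; the job name is made up, and the INSERT + UPDATE batch above is assumed to be wrapped in a stored procedure named dbo.CopyNewFloatRows (a hypothetical name, not from the original answer):

USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name = N'Copy new float rows';                -- hypothetical job name
EXEC dbo.sp_add_jobstep
     @job_name      = N'Copy new float rows',
     @step_name     = N'Copy batch',
     @subsystem     = N'TSQL',
     @database_name = N'YourDatabase',                  -- replace with the actual database
     @command       = N'EXEC dbo.CopyNewFloatRows;';    -- hypothetical wrapper around the INSERT + UPDATE above
EXEC dbo.sp_add_jobschedule
     @job_name             = N'Copy new float rows',
     @name                 = N'Every 10 minutes',
     @freq_type            = 4,      -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,      -- units = minutes
     @freq_subday_interval = 10;     -- every 10 minutes
EXEC dbo.sp_add_jobserver
     @job_name = N'Copy new float rows';                -- target the local server
GO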


Both ideas are viable, but it really depends on how the table is used. If it is used only as a logging table, and the trigger fires for just one record every 15 seconds (and nothing else), the trigger should be fine. If the data sync does not need to happen that fast or that often, a scheduled data load with an INSERT + UPDATE should be fine too. YMMV; always test before implementing.
