From what I can see you have two problems. The first is the doubling. At a guess it is because of the SELECT statement in your INSERT statement.
SELECT @NewMetricID,
NULL,
sm.sortorder,
ISNULL(sm.MetricOrder, 1),
ISNULL(sm.CategoryOrder, 1),
sm.RptCurrentGroup,
'System',
GetDate(),
NULL,
NULL,
'N',
NULL,
NULL
FROM Metric_Instance mi
INNER JOIN Shared_Metrics sm on mi.MetricID = sm.MetricID
WHERE mi.MetricID = @CurrentMetricID
AND mi.MetricDisposition <> 'Suspended'
AND mi.isDeleted <> 'Y'
AND sm.isDeleted <> 'Y';
I would be willing to bet your query is returning more than one row. Any time you have a join like that you run a risk of creating duplicate rows. The easy way to fix it is to do this:
SELECT DISTINCT @NewMetricID,
NULL,
sm.sortorder,
ISNULL(sm.MetricOrder, 1),
...
...
...
The only real change is that I'm putting a DISTINCT at the top of the query. This means that if there are any valid duplicates (rows that differ in at least one selected column) you are still going to pick them up. You are not actually referencing any columns from Metric_Instance, so you could also change the JOIN to a WHERE EXISTS (SELECT 1 ...
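A sketch of that EXISTS rewrite, using the same table and variable names as the query above (untested against your schema, so treat it as illustrative):

```sql
-- With no JOIN, this query cannot produce duplicate rows
-- even if Metric_Instance has several matching rows.
SELECT @NewMetricID,
       NULL,
       sm.sortorder,
       ISNULL(sm.MetricOrder, 1),
       ISNULL(sm.CategoryOrder, 1),
       sm.RptCurrentGroup,
       'System',
       GETDATE(),
       NULL, NULL, 'N', NULL, NULL
FROM Shared_Metrics sm
WHERE sm.MetricID = @CurrentMetricID
  AND sm.isDeleted <> 'Y'
  AND EXISTS (SELECT 1
              FROM Metric_Instance mi
              WHERE mi.MetricID = sm.MetricID
                AND mi.MetricDisposition <> 'Suspended'
                AND mi.isDeleted <> 'Y');
```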
On to problem two. In your code you are using scope_identity(). This is only going to return the last identity value created; if you insert more than one row, the previous rows are going to be missed. The best solution here is to get rid of the cursor and change this into a batch process: INSERT all possible rows initially, using the OUTPUT clause to dump a list of identity values and CurrentMetricIDs into a temp table, then do your update using that temp table to update all of the values at once. Not only will this fix your problem, it will probably run quite a bit faster as well.
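The OUTPUT approach could look something like this. The target table and column names (Metric, MetricID, SourceMetricID, LinkedMetricID) are assumptions standing in for your real schema:

```sql
-- Hypothetical sketch: capture every new identity value with OUTPUT
-- instead of calling SCOPE_IDENTITY() once per row inside a cursor.
CREATE TABLE #NewIDs (NewMetricID INT, SourceMetricID INT);

-- One set-based insert; OUTPUT records each generated identity
-- alongside the source row that produced it.
INSERT INTO Metric (SourceMetricID, CreatedBy, CreatedDate)
OUTPUT inserted.MetricID, inserted.SourceMetricID
INTO #NewIDs (NewMetricID, SourceMetricID)
SELECT sm.MetricID, 'System', GETDATE()
FROM Shared_Metrics sm
WHERE sm.isDeleted <> 'Y';

-- One set-based update using all captured identities at once.
UPDATE sm
SET sm.LinkedMetricID = n.NewMetricID
FROM Shared_Metrics sm
INNER JOIN #NewIDs n ON n.SourceMetricID = sm.MetricID;
```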
Hi. Linked servers are in general not best practice; I generally recommend against them. There are too many reasons to explain here... this would become a white paper, hahaha. I have been doing ETL and data security for 25 years. Please consult an architect, and don't ask people on the internet claiming to be architects; there are very few people out there with my level of experience. There is never one answer either... how you approach this depends on many factors. And testing a small number of records is not a valid test; do more intensive testing. Best of luck!
Best Answer
When inserting one record every 15 seconds I would not be worried about using triggers.
Consider these two heap tables, each with one rowversion column and 50 float columns. Both tables have ~2M rows (1,940,480).
Solution with a trigger
When we create an AFTER INSERT trigger on dbo.floattable1 to also insert into dbo.floattable2:
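The original test's trigger DDL is not shown here, so this is a minimal sketch under the assumption that both tables share the same column list (the col1 ... col50 names are placeholders):

```sql
-- Hypothetical sketch: copy every inserted row from dbo.floattable1
-- into dbo.floattable2. Assumes identical column lists apart from the
-- rowversion column, which SQL Server generates automatically.
CREATE TRIGGER trg_floattable1_ai
ON dbo.floattable1
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.floattable2 (col1, col2 /* ..., col50 */)
    SELECT col1, col2 /* ..., col50 */
    FROM inserted;  -- set-based: handles multi-row inserts correctly
END;
```

Reading from the inserted pseudo-table keeps the trigger set-based, so it behaves correctly even when a single statement inserts many rows.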
Afterwards, we insert one whole row into dbo.floattable1, resulting in trigger execution, and a very simple plan with 1 logical read.
Solution for the job / FlagColumn
Drop the trigger, add a flag column, set it to 1 for the existing (already synced) rows, and add an index on it.
Insert 40 test records with the flag column = 0.
Insert into dbo.floattable2 and update the flag columns.
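The scheduled job's insert-then-update step could be sketched as follows. The FlagColumn name and the 0 = pending / 1 = synced convention are assumptions based on the description above:

```sql
-- Hypothetical sketch of the scheduled sync job:
-- copy pending rows, then mark them as synced.
-- Run both statements in one transaction so a failure
-- cannot mark rows as synced without copying them.
BEGIN TRANSACTION;

INSERT INTO dbo.floattable2 (col1, col2 /* ..., col50 */)
SELECT col1, col2 /* ..., col50 */
FROM dbo.floattable1
WHERE FlagColumn = 0;

UPDATE dbo.floattable1
SET FlagColumn = 1
WHERE FlagColumn = 0;

COMMIT TRANSACTION;
```

Note that rows arriving between the two statements could be flagged without being copied; if that matters, capture the copied keys (e.g. via OUTPUT into a temp table) and update only those.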
The first insert can be improved to be equal to the trigger's insert, by removing the RID lookup (adding all floats to the included column list of the index).
But the update will give some extra overhead.
The advantage you have here is that you can schedule this when you want, and you get some more information about your data via the flag. The extra index means extra overhead, and an Insert + Update means more overhead than only issuing an Insert.
Both ideas are viable, but it will really depend on how the table is used. If it is only used as a logging table, and the trigger will fire for one record every 15 seconds (and nothing else), the trigger should be fine. If the data sync does not need to happen that fast / often, a data load with an INSERT + UPDATE should be fine too. YMMV, always test before implementing.