I pulled this chunk of code from the stored procedure and altered it to use hardcoded variables, then ran it independently of the application with the same result. For each record share processed, I need one insert statement and one update statement to execute when the required conditions are met. Instead, two insert statements are executing for each share processed, and I can't trace how this is happening; the first inserted record is effectively lost.
-- Declare variables for the fields of the tables.
DECLARE @CurrentMetricID Int = 3090758;
DECLARE @CurrentSharedMetricID Int;
DECLARE @NewMetricID Int = 3090778;
DECLARE @NewSharedMetricID Int;
IF (EXISTS(SELECT b.SharedMetricID
           FROM Shared_Metrics b
           WHERE b.MetricID = @CurrentMetricID AND b.nextRecordID IS NULL))
BEGIN --METRIC PUBLISH FIRST
    DECLARE cr_metric_first CURSOR FOR
        SELECT sm.SharedMetricID
        FROM Shared_Metrics sm
        WHERE sm.MetricID = @CurrentMetricID AND sm.nextRecordID IS NULL;

    OPEN cr_metric_first;
    FETCH NEXT FROM cr_metric_first INTO @CurrentSharedMetricID;

    WHILE @@FETCH_STATUS = 0
    BEGIN -- Metric Publishing First
        INSERT INTO Shared_Metrics (MetricID, ScorecardID, SortOrder,
                                    MetricOrder, CategoryOrder, RptCurrentGroup,
                                    CreatedBy, DateCreated, UpdatedBy, DateUpdated,
                                    isDeleted, ReportPeriodStart, nextRecordID)
        SELECT @NewMetricID,
               NULL,
               sm.SortOrder,
               ISNULL(sm.MetricOrder, 1),
               ISNULL(sm.CategoryOrder, 1),
               sm.RptCurrentGroup,
               'System',
               GETDATE(),
               NULL,
               NULL,
               'N',
               NULL,
               NULL
        FROM Metric_Instance mi
        INNER JOIN Shared_Metrics sm ON mi.MetricID = sm.MetricID
        WHERE mi.MetricID = @CurrentMetricID
          AND mi.MetricDisposition <> 'Suspended'
          AND mi.isDeleted <> 'Y'
          AND sm.isDeleted <> 'Y';

        SELECT @NewSharedMetricID = SCOPE_IDENTITY();

        -- Check for errors
        IF @@ERROR <> 0
        BEGIN
            SELECT 'error';
            -- Determine if the cursor exists and deallocate it.
            IF (SELECT CURSOR_STATUS('global', 'cr_metric_first')) >= -1
            BEGIN
                CLOSE cr_metric_first;
                DEALLOCATE cr_metric_first;
            END;
            -- Roll back the transaction
            ROLLBACK TRANSACTION;
            -- Raise an error and return
            RAISERROR ('Publishing Error: Cannot insert new shared metric record for next reporting period.', 16, 1);
            RETURN;
        END; -- of Error

        -- Link the old record to the new SharedMetricID
        UPDATE Shared_Metrics
        SET nextRecordID = @NewSharedMetricID
        WHERE MetricID = @CurrentMetricID AND SharedMetricID = @CurrentSharedMetricID;

        IF @@ERROR <> 0
        BEGIN
            SELECT 'error';
            -- Determine if the cursor exists and deallocate it.
            IF (SELECT CURSOR_STATUS('global', 'cr_metric_first')) >= -1
            BEGIN
                CLOSE cr_metric_first;
                DEALLOCATE cr_metric_first;
            END;
            -- Roll back the transaction
            ROLLBACK TRANSACTION;
            -- Raise an error and return
            RAISERROR ('Publishing Error: Cannot update shared metric NextRecordID.', 16, 1);
            RETURN;
        END; -- of Error

        FETCH NEXT FROM cr_metric_first INTO @CurrentSharedMetricID;
    END; -- Metric Publishing First

    CLOSE cr_metric_first;
    DEALLOCATE cr_metric_first;
END; --METRIC PUBLISH FIRST
After analyzing the data: for the first shared record, an insert runs but that row never proceeds to the update statement. A second insert immediately follows it, and it is that second record whose identity is used in the update. When the second share in the list is processed, the same thing happens. Since only the second inserted record ever reaches the update statement, the first is effectively lost to the application: its key field is never linked back to the old record (this is a Slowly Changing Dimension pattern). When the application runs, it pulls information from this table, and records that were never linked back are never added to the list of metrics displayed to the user. Everything appears functional in the application, but those forgotten records are still being processed and carried over to the next month, because the code looks them up by MetricID.
In the first month the share is created and only one record exists for each shared metric record in the table. After the second month there is one extra record per share processed that the application never picks up but that still consumes memory and processing power. After the third month there are two extra records per share; after the fourth, four. It grows exponentially, and this is per record shared. Since the application displays nothing for these rows, the only reason I noticed is that the volume of records being pulled and processed, double the intended amount and climbing, started to slow down the server. Processing power dropped so fast it looked like an endless loop. I finally caught on to why the app seemed to freeze and then time out: the code was trying to process over 65K shares for just one shared record.
Best Answer
From what I can see you have two problems. The first is the doubling. At a guess it is because of the SELECT statement in your INSERT statement. I would be willing to bet your query is returning more than one row: any time you have a join like that you run the risk of creating duplicate rows. The easy way to fix it is to do this:
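A sketch of that change, reconstructed from the INSERT in the question (the only difference is the DISTINCT; column values are unchanged):

```sql
INSERT INTO Shared_Metrics (MetricID, ScorecardID, SortOrder,
                            MetricOrder, CategoryOrder, RptCurrentGroup,
                            CreatedBy, DateCreated, UpdatedBy, DateUpdated,
                            isDeleted, ReportPeriodStart, nextRecordID)
SELECT DISTINCT            -- collapses the duplicate rows produced by the join
       @NewMetricID, NULL, sm.SortOrder,
       ISNULL(sm.MetricOrder, 1), ISNULL(sm.CategoryOrder, 1),
       sm.RptCurrentGroup, 'System', GETDATE(),
       NULL, NULL, 'N', NULL, NULL
FROM Metric_Instance mi
INNER JOIN Shared_Metrics sm ON mi.MetricID = sm.MetricID
WHERE mi.MetricID = @CurrentMetricID
  AND mi.MetricDisposition <> 'Suspended'
  AND mi.isDeleted <> 'Y'
  AND sm.isDeleted <> 'Y';
```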
The only real change is that I'm putting a DISTINCT at the top of the query. This means that if there are any VALID duplicates you are still going to pick them up, since DISTINCT only removes rows that are identical in every column. You are not actually referencing any columns from Metric_Instance, so you could also change it from a JOIN to a WHERE EXISTS (SELECT 1 ...).
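The EXISTS variant might look like this (a sketch; it avoids the join-induced duplication in the first place rather than patching it with DISTINCT):

```sql
INSERT INTO Shared_Metrics (MetricID, ScorecardID, SortOrder,
                            MetricOrder, CategoryOrder, RptCurrentGroup,
                            CreatedBy, DateCreated, UpdatedBy, DateUpdated,
                            isDeleted, ReportPeriodStart, nextRecordID)
SELECT @NewMetricID, NULL, sm.SortOrder,
       ISNULL(sm.MetricOrder, 1), ISNULL(sm.CategoryOrder, 1),
       sm.RptCurrentGroup, 'System', GETDATE(),
       NULL, NULL, 'N', NULL, NULL
FROM Shared_Metrics sm
WHERE sm.MetricID = @CurrentMetricID
  AND sm.isDeleted <> 'Y'
  -- the join becomes a semi-join: Metric_Instance is only tested, never multiplied
  AND EXISTS (SELECT 1
              FROM Metric_Instance mi
              WHERE mi.MetricID = sm.MetricID
                AND mi.MetricDisposition <> 'Suspended'
                AND mi.isDeleted <> 'Y');
```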
On to problem two. In your code you are using scope_identity(). This only returns the last identity value created, so if you insert more than one row, the earlier rows are going to be missed. The best solution here is to get rid of the cursor and change this into a batch process: INSERT all possible rows at once, using the OUTPUT clause to dump the list of new identity values and CurrentMetricIDs into a temp table, then do your update using that temp table to link all of the values in one statement. Not only will this fix your problem, it will probably run quite a bit faster as well.
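A sketch of that batch approach, assuming the table and column names from the question (untested against the real schema; a table variable stands in for the temp table). MERGE is used for the insert because OUTPUT ... INTO on a plain INSERT cannot reference source columns such as sm.SharedMetricID, only the inserted row's columns:

```sql
-- Capture each old row's key alongside its newly created identity.
DECLARE @Links TABLE (OldSharedMetricID INT, NewSharedMetricID INT);

-- One set-based insert replaces the cursor loop.
MERGE Shared_Metrics AS tgt
USING (SELECT sm.SharedMetricID, sm.SortOrder, sm.MetricOrder,
              sm.CategoryOrder, sm.RptCurrentGroup
       FROM Shared_Metrics sm
       WHERE sm.MetricID = @CurrentMetricID
         AND sm.nextRecordID IS NULL
         AND sm.isDeleted <> 'Y'
         AND EXISTS (SELECT 1
                     FROM Metric_Instance mi
                     WHERE mi.MetricID = sm.MetricID
                       AND mi.MetricDisposition <> 'Suspended'
                       AND mi.isDeleted <> 'Y')) AS src
ON 1 = 0   -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (MetricID, ScorecardID, SortOrder, MetricOrder, CategoryOrder,
            RptCurrentGroup, CreatedBy, DateCreated, UpdatedBy, DateUpdated,
            isDeleted, ReportPeriodStart, nextRecordID)
    VALUES (@NewMetricID, NULL, src.SortOrder, ISNULL(src.MetricOrder, 1),
            ISNULL(src.CategoryOrder, 1), src.RptCurrentGroup, 'System',
            GETDATE(), NULL, NULL, 'N', NULL, NULL)
OUTPUT src.SharedMetricID, inserted.SharedMetricID INTO @Links;

-- Link every old row to its replacement in one statement.
UPDATE sm
SET sm.nextRecordID = l.NewSharedMetricID
FROM Shared_Metrics sm
INNER JOIN @Links l ON sm.SharedMetricID = l.OldSharedMetricID;
```

Because the old key and the new identity are paired row-by-row in @Links, every inserted record gets linked back, which is exactly what the scope_identity()-in-a-loop version fails to guarantee.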