I need to catch errors in a transaction in a stored procedure and log them in a table in the catch block.
After starting the transaction, a loop will attempt to insert a handful of values in the table. I surround each insert statement with a try/catch block, so that if a Primary Key violation occurs on any one of the inserts, I handle the error by inserting a record in a log table. I will then commit the transaction once the loop completes and it has attempted all inserts, even if one of them failed.
I'm concerned that calling SET XACT_ABORT ON at the beginning of the procedure will cause the transaction to be aborted when the PK error occurs, even though I'm catching and handling the error. Is that true, or does TRY/CATCH intercept the error and suppress it so that the transaction is not aborted?
If TRY/CATCH doesn't stop the error from aborting the transaction, could I call SET XACT_ABORT OFF instead to get the error-trapping behavior I need?
Here is some code to test the behavior:
First, create the target table and a log table:
CREATE TABLE ErrorTestTable (a int primary key clustered)
CREATE TABLE ErrorLogTable (m nvarchar(500), d datetime2(7))
Next, create the procedure:
CREATE PROCEDURE [dbo].[ErrorTest]
AS
BEGIN
SET XACT_ABORT ON;
SET NOCOUNT ON;
DECLARE @Try int = 0;
DECLARE @MaxTries int = 5;
BEGIN TRAN;
WHILE @Try < @MaxTries
BEGIN
BEGIN TRY
PRINT 'begin try - @@trancount: ' + cast(@@TranCount as nvarchar(max)) + '; xact_state: ' + cast(XACT_STATE() as nvarchar(max));
INSERT INTO ErrorTestTable (a) VALUES (1);
INSERT INTO ErrorLogTable(m,d) VALUES ('successfully inserted record!', sysutcdatetime());
END TRY
BEGIN CATCH
PRINT 'begin catch - @@trancount: ' + cast(@@TranCount as nvarchar(max)) + '; xact_state: ' + cast(XACT_STATE() as nvarchar(max))
INSERT INTO ErrorLogTable(m,d) VALUES ('pk violation!', sysutcdatetime());
END CATCH
SET @Try += 1;
END
COMMIT TRAN;
END
Finally, run these statements and observe the behavior:
DELETE FROM ErrorLogTable
DELETE FROM ErrorTestTable
SELECT * FROM ErrorLogTable
SELECT * FROM ErrorTestTable
BEGIN TRY
EXEC dbo.ErrorTest;
END TRY
BEGIN CATCH
PRINT 'Error thrown by procedure.';
END CATCH
SELECT * FROM ErrorLogTable
SELECT * FROM ErrorTestTable
From testing a simple procedure, I've found a few things out. With SET XACT_ABORT ON, when the first primary key violation occurs, execution jumps to the CATCH handler. At that point, XACT_STATE() reports the transaction as uncommittable (-1). Within the CATCH block, when I try to insert into the log table, another error is thrown and the stored procedure exits. I catch that error outside the sproc and select from the two tables, where I can then see the uncommitted data.
At that point the transaction is still not rolled back, and it's not until the whole batch completes (including the SELECT calls after the sproc call) that SQL Server finally displays an error message saying
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
and rolls back the operations.
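One way to regain control of the doomed transaction is to check XACT_STATE() inside the CATCH block and roll back before logging. A minimal sketch, as a variant of the CATCH block in the procedure above (note the caveat: rolling back also discards the earlier successful inserts, and the later COMMIT TRAN must then be guarded since @@TRANCOUNT will be 0):

BEGIN CATCH
    PRINT 'begin catch - xact_state: ' + cast(XACT_STATE() as nvarchar(max));
    IF XACT_STATE() = -1
        ROLLBACK TRAN; -- the transaction is doomed; no further work can run inside it until it is rolled back
    INSERT INTO ErrorLogTable(m,d) VALUES ('pk violation!', sysutcdatetime()); -- now runs outside the original transaction
END CATCH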
Response to comments
Why not just validate the information first, or do the error handling in the application calling the proc?
Already doing both. Not only validating, but also taking out a custom application lock on the value to be inserted, so lock contention is virtually non-existent and PK violations are basically impossible. I just want to know whether I can handle the error if it were to occur. This is a very high-concurrency (100+ threads) rapid insert of "unused" random identifiers, so the procedure not only has to check whether X random values are already in use, but must settle on one to insert by attempting to take out a lock on the value. If, after successful lock acquisition, the value is still not in use, I insert it.
Best Answer
Your problem stems from the fact that once the transaction becomes uncommittable (i.e. an error is raised), your loop does not honor the failure and continues to insert data into the table. When the database engine attempts to commit your subsequent changes, it cannot, because the transaction is no longer valid. Moving your TRY/CATCH outside of the WHILE loop resolves part of your problem. However, once the transaction is rolled back, any numbers entered during the transaction will be lost.
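A minimal sketch of that restructuring, reusing the variables and tables from the question (the first PK violation now ends the loop, and the rollback discards every insert made in the transaction):

BEGIN TRY
    BEGIN TRAN;
    WHILE @Try < @MaxTries
    BEGIN
        INSERT INTO ErrorTestTable (a) VALUES (1);
        SET @Try += 1;
    END
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRAN; -- every insert from the transaction is lost here
    INSERT INTO ErrorLogTable(m,d) VALUES ('pk violation!', sysutcdatetime());
END CATCH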
Also, from experience I have found problems issuing an INSERT statement from inside a CATCH block: sometimes this INSERT can also be rolled back with the transaction. However, there is a work-around, which is to declare a table variable, insert your custom state into that, and then perform the logging after you have correctly handled the transaction. Any data inserted into a table variable will still be available after the transaction is committed or rolled back. You could use this to capture the numbers being entered, and then, when the transaction encounters the primary key violation, reinsert the numbers from the tracked table up to the number that failed, like so:
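A sketch of that work-around, using the tables from the question (the @Log table variable name is my own illustration): buffer log rows in the table variable, handle the transaction, then persist the log.

DECLARE @Log TABLE (m nvarchar(500), d datetime2(7));

BEGIN TRY
    BEGIN TRAN;
    INSERT INTO ErrorTestTable (a) VALUES (1);
    INSERT INTO @Log (m, d) VALUES ('successfully inserted record!', sysutcdatetime());
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRAN; -- rows already in @Log survive the rollback
    INSERT INTO @Log (m, d) VALUES ('pk violation!', sysutcdatetime());
END CATCH

-- Persist the buffered log entries once the transaction has been handled.
INSERT INTO ErrorLogTable (m, d)
SELECT m, d FROM @Log;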
If you want to ensure that the subsequent numbers are entered even if one number fails, you will need to add logic outside of the transaction to do it. Personally, I would rewrite the stored procedure to use a set-based operation instead of the manual loop, and also include logic to try to avoid the primary key violation altogether, like so:
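A sketch of such a set-based rewrite, assuming the goal is to insert a batch of candidate values that are not already present (the procedure name, @Candidates table variable, and sample values are my own illustration):

CREATE PROCEDURE dbo.ErrorTestSetBased
AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;

    DECLARE @Candidates TABLE (a int PRIMARY KEY);
    INSERT INTO @Candidates (a) VALUES (1), (2), (3), (4), (5);

    BEGIN TRY
        BEGIN TRAN;

        -- Insert only the candidates not already present,
        -- avoiding the PK violation instead of handling it.
        INSERT INTO ErrorTestTable (a)
        SELECT c.a
        FROM @Candidates AS c
        WHERE NOT EXISTS (SELECT 1 FROM ErrorTestTable AS t WHERE t.a = c.a);

        COMMIT TRAN;

        INSERT INTO ErrorLogTable (m, d)
        VALUES ('set-based insert succeeded', sysutcdatetime());
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRAN;
        INSERT INTO ErrorLogTable (m, d)
        VALUES (ERROR_MESSAGE(), sysutcdatetime());
    END CATCH
END

Note that under heavy concurrency the NOT EXISTS check is not race-free on its own; it would still need locking hints such as WITH (UPDLOCK, HOLDLOCK) on the existence check, or the application lock the question already describes.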