I'm attempting to build an all-or-nothing TSQL statement, something like this:
BEGIN TRANSACTION
BEGIN TRY
-- Loop over a TVP passed as a param
DECLARE i CURSOR LOCAL FAST_FORWARD FOR SELECT [stmt] FROM TVP param;
OPEN i;
FETCH NEXT FROM i INTO @iDB, @iTBL, @iATTRS, @iVALS;
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQL = ... -- build the statement from the current TVP row
EXECUTE sys.sp_executesql @SQL;
FETCH NEXT FROM i INTO @iDB, @iTBL, @iATTRS, @iVALS; -- advance the cursor, or the loop never ends
END
CLOSE i;
DEALLOCATE i;
COMMIT TRANSACTION
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage;
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
END CATCH
The SP generally works as intended, in that the transaction is all or nothing, and running the SP from SSMS returns the error selected in the CATCH block.
But on the C# side, the dbResponse.RecordsAffected value is always incremented, even when the transaction rolls back.
Why is this? Also, how can I get this SP to return the correct rows-affected value (i.e. the number of rows actually affected by the SP on success)?
Is it possible to reset this value in the CATCH block?
Best Answer
That's just a total of the row count messages returned from SQL Server. These are sent as queries are run, before the transaction has been committed or rolled back.
Keep track of the row count in a variable, and return that to the client with an output parameter or a result set.
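A minimal sketch of the counting approach (the @TotalRows and @RowsAffected names are illustrative, not from the original procedure):

```sql
DECLARE @TotalRows int = 0;
-- inside the cursor loop, immediately after each dynamic statement:
EXECUTE sys.sp_executesql @SQL;
SET @TotalRows += @@ROWCOUNT;  -- @@ROWCOUNT reflects the statement just executed
-- after the loop completes:
COMMIT TRANSACTION;
SET @RowsAffected = @TotalRows;  -- OUTPUT parameter; or: SELECT @TotalRows AS RowsAffected;
```

Note that @@ROWCOUNT must be captured on the very next statement after sp_executesql, since almost any intervening statement resets it.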
As @Kin suggested, you can SET NOCOUNT ON so that no rowcount messages are sent to the client. Then, after you COMMIT, you can turn NOCOUNT OFF and run a query that generates the desired rowcount message. Perhaps something cheesy like:
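One purely illustrative way to fake that message (assuming the true total was accumulated in a variable such as @TotalRows during the loop):

```sql
SET NOCOUNT OFF;
-- Emit a single "(N rows affected)" message matching the real total,
-- so the client's RecordsAffected sees one accurate count.
SELECT TOP (@TotalRows) NULL AS dummy
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;
```

The cross join is only there to guarantee enough rows for TOP (@TotalRows) to draw from; any sufficiently large row source would do.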