The row-versioning framework introduced in SQL Server 2005 is used to support a number of features, including the new transaction isolation levels READ_COMMITTED_SNAPSHOT and SNAPSHOT. Even when neither of these isolation levels is enabled, row versioning is still used for AFTER triggers (to facilitate generation of the inserted and deleted pseudo-tables), MARS, and (in a separate version store) online indexing.
As documented, the engine may add a 14-byte suffix to each row of a table that is versioned for any of these purposes. This behaviour is relatively well known, as is the addition of the 14 bytes of versioning data to every row of an index that is rebuilt online while a row-versioning isolation level is enabled. Even where the isolation levels are not enabled, the extra bytes are added to non-clustered indexes only when they are rebuilt ONLINE.
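For reference (not needed for the demonstration below, which relies on neither level being enabled), both isolation levels are switched on per database; a minimal sketch, using the demo database name:
-- Enable the row-versioning isolation levels for one database
-- (READ_COMMITTED_SNAPSHOT needs no other active connections)
ALTER DATABASE Sandpit SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE Sandpit SET ALLOW_SNAPSHOT_ISOLATION ON;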
Where an AFTER trigger is present, and versioning would otherwise add 14 bytes per row, an optimization exists within the engine to avoid this, but only where a ROW_OVERFLOW or LOB allocation cannot occur. In practice, this means the maximum possible size of a row must be less than 8060 bytes. When calculating maximum possible row sizes, the engine assumes, for example, that a VARCHAR(460) column could contain 460 characters.
The behaviour is easiest to see with an AFTER UPDATE trigger, though the same principle applies to AFTER DELETE. The following script creates a table with a maximum in-row length of 8060 bytes. The data fits on a single page, with 13 bytes of free space on that page. Because a no-op trigger exists, the page is split and versioning information is added:
USE Sandpit;
GO
-- Worst-case row size is exactly at the 8060-byte in-row limit
CREATE TABLE dbo.Example
(
    ID integer NOT NULL IDENTITY(1,1),
    Value integer NOT NULL,
    Padding1 char(42) NULL,
    Padding2 varchar(8000) NULL,

    CONSTRAINT PK_Example_ID
        PRIMARY KEY CLUSTERED (ID)
);
GO
-- Add 137 rows; with Padding1 and Padding2 NULL,
-- all of them fit on a single page
WITH
    N1 AS (SELECT 1 AS n UNION ALL SELECT 1),
    N2 AS (SELECT L.n FROM N1 AS L CROSS JOIN N1 AS R),
    N3 AS (SELECT L.n FROM N2 AS L CROSS JOIN N2 AS R),
    N4 AS (SELECT L.n FROM N3 AS L CROSS JOIN N3 AS R)
INSERT TOP (137) dbo.Example
    (Value)
SELECT
    ROW_NUMBER() OVER (ORDER BY (SELECT 0))
FROM N4;
GO
-- Pack the rows as tightly as possible
ALTER INDEX PK_Example_ID
ON dbo.Example
REBUILD WITH (FILLFACTOR = 100);
GO
-- Page and record counts before the trigger exists
SELECT
    ddips.index_type_desc,
    ddips.alloc_unit_type_desc,
    ddips.index_level,
    ddips.page_count,
    ddips.record_count,
    ddips.max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Example', N'U'), 1, 1, 'DETAILED') AS ddips
WHERE
    ddips.index_level = 0;
GO
-- A trigger that does nothing at all
CREATE TRIGGER ExampleTrigger
ON dbo.Example
AFTER DELETE, UPDATE
AS RETURN;
GO
-- Update a single row
UPDATE dbo.Example
SET Value = -Value
WHERE ID = 1;
GO
-- Page and record counts after the update
SELECT
    ddips.index_type_desc,
    ddips.alloc_unit_type_desc,
    ddips.index_level,
    ddips.page_count,
    ddips.record_count,
    ddips.max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Example', N'U'), 1, 1, 'DETAILED') AS ddips
WHERE
    ddips.index_level = 0;
GO
DROP TABLE dbo.Example;
Running the script, the second sys.dm_db_index_physical_stats query shows that the single-page table has been split into two pages, and that the maximum physical row length has increased from 57 to 71 bytes (+14 bytes for the row-versioning information). DBCC PAGE shows that the single updated row has Record Attributes = NULL_BITMAP VERSIONING_INFO and Record Size = 71, whereas every other row in the table has Record Attributes = NULL_BITMAP and Record Size = 57.
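To see those record attributes yourself, dump a data page; a sketch, where the page ID must be taken from the DBCC IND output (the id shown is a placeholder):
-- Send DBCC output to the client instead of the error log
DBCC TRACEON (3604);

-- List the pages that belong to the clustered index
DBCC IND (Sandpit, 'dbo.Example', 1);

-- Dump one data page (file 1; substitute a page id from DBCC IND)
DBCC PAGE (Sandpit, 1, <data page id>, 3);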
The same script, with the UPDATE replaced by a single-row DELETE, behaves slightly differently:
DELETE dbo.Example
WHERE ID = 1;
There is one fewer row in total (of course!), but the maximum physical row size has not increased. Row-versioning information is only added to rows needed for the trigger pseudo-tables, and that row was ultimately deleted. The page split remains, however. This page-splitting activity is responsible for the slow performance observed when the trigger was present. If the definition of the Padding2 column is changed from varchar(8000) to varchar(7999), the page no longer splits.
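To verify that last point, it is enough to shave one byte off the worst-case row size; a minimal sketch against the demo table:
-- With varchar(7999), the maximum possible row no longer reaches
-- 8060 bytes, so no space for versioning data needs to be reserved
ALTER TABLE dbo.Example
    ALTER COLUMN Padding2 varchar(7999) NULL;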
See also this blog post by SQL Server MVP Dmitri Korotkevitch, which discusses the impact on fragmentation.
After a considerable amount of testing, I finally discovered the reason behind this error. The client connection explicitly set ANSI_WARNINGS and CONCAT_NULL_YIELDS_NULL to OFF. XML data operations, such as @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'), require both to be ON. I had attempted to override these within the trigger, but I may have placed the SET statements incorrectly. The final code below works, even with the explicit SET options in the connections from Great Plains:
CREATE TRIGGER [ddl_log]
ON ALL SERVER
FOR DDL_DATABASE_LEVEL_EVENTS, DDL_SERVER_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;

    -- The XML methods below require both of these options ON,
    -- regardless of the client connection's settings
    SET ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL ON;

    DECLARE @data XML;
    SET @data = EVENTDATA();

    EXECUTE AS LOGIN = '<dummy login>';

    INSERT admin.dbo.ddl_audit (PostTime, DB_User, [Event], [TSQL], Host, DatabaseName)
    VALUES (
        GETDATE(),
        CONVERT(NVARCHAR(100), ORIGINAL_LOGIN()),
        @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'),
        CONVERT(NVARCHAR(100), HOST_NAME()),
        @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(100)')
    );

    REVERT;
END;
As an alternative, I could also have simply inserted EVENTDATA() as an XML LOB into a table, rather than parsing it out into columns. Because I would not be manipulating the XML, the SET options would not matter. I could then build an XML index for query performance, and construct a view for audit-log reporting that parses the XML in the view definition, in the same manner as the INSERT statement above.
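A minimal sketch of that alternative (the table, index, and view names are hypothetical):
USE admin;
GO
-- Store the raw event XML; nothing is parsed at insert time
CREATE TABLE dbo.ddl_audit_xml
(
    AuditID   integer  IDENTITY(1,1) NOT NULL PRIMARY KEY,
    PostTime  datetime NOT NULL DEFAULT GETDATE(),
    EventData xml      NOT NULL
);
GO
-- Optional: a primary XML index to support querying the XML
CREATE PRIMARY XML INDEX PXML_ddl_audit_xml
ON dbo.ddl_audit_xml (EventData);
GO
-- Parse the XML only when reporting
CREATE VIEW dbo.ddl_audit_report
AS
SELECT
    PostTime,
    EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)') AS [Event],
    EventData.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)') AS [TSQL],
    EventData.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(100)') AS DatabaseName
FROM dbo.ddl_audit_xml;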
Thanks to Max for pointing me in a different research direction, and to @AaronBertrand on #sqlhelp, who helped me with the correct SET options within the body of the trigger.
Best Answer
You have three options:
Disable "nested triggers" using
sp_configure
. This is a server/instance -level option, so it might disable functionality in other areas where you might have triggers calling other triggers. But it should at least be mentioned. You can see the current setting using the following:And you can disable this ability using the following:
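A minimal sketch ("nested triggers" is not an advanced option, so no "show advanced options" toggle is needed):
-- Check the current setting
EXEC sys.sp_configure N'nested triggers';

-- Disable nested triggers instance-wide
EXEC sys.sp_configure N'nested triggers', 0;
RECONFIGURE;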
You can read more about this setting here: Configure the nested triggers Server Configuration Option.
2. Have each trigger detect how many levels deep it is using the TRIGGER_NESTLEVEL function. Use this at the beginning of each trigger to simply exit if it is not the 1st or 2nd trigger execution in the stack; something along the lines of the sketch below.
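A minimal sketch of that check (the threshold of 2 matches the description above; adjust it to suit the chain):
-- Exit unless this is the 1st or 2nd trigger execution in the stack
IF (TRIGGER_NESTLEVEL() > 2)
BEGIN
    RETURN;
END;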
This will require a little bit of testing to see how it is affected by the initial insert being done by yet another trigger (in case that ever becomes an issue, but it might not). If it doesn't work as expected when called by another trigger, then try passing some of the optional parameters to this function. Please see the documentation (linked above) for details.
3. Set a flag in the session-based "context info" using SET CONTEXT_INFO. Context info is a VARBINARY(128) value that exists at the session level and retains its value until overwritten or until the session ends. The value can be retrieved either by using the CONTEXT_INFO function or by selecting the context_info column from either of the following DMVs: sys.dm_exec_requests and sys.dm_exec_sessions. You could place something like the sketch below at the beginning of each of those triggers.
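A sketch, using an arbitrary flag value (0x55 is just a placeholder; use any value not already used by the session):
-- Bail out if a trigger earlier in this chain already set the flag
-- (binary comparison right-pads the shorter operand with 0x00)
IF (CONTEXT_INFO() = 0x55)
BEGIN
    RETURN;
END;

-- Mark the session so any downstream trigger knows we have fired
SET CONTEXT_INFO 0x55;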
This option doesn't work so well if you are already using context info for some other reason. But anyone using SQL Server 2016 or later can make use of SESSION_CONTEXT, which is a session-based set of key-value pairs.
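For example, with a hypothetical key name:
-- Bail out if the flag is already set for this session
IF (CONVERT(int, SESSION_CONTEXT(N'AuditTriggerFired')) = 1)
BEGIN
    RETURN;
END;

-- Set the flag (not read-only, so it could be cleared later)
EXEC sys.sp_set_session_context @key = N'AuditTriggerFired', @value = 1;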