SQL Server – Trigger performance – UPDATE statement joining on INSERTED

sql-server, trigger

I have a trigger (MS SQL Server) on TABLE_A that fires on an update that looks basically like the following (I've changed the names of the tables/trigger to simplify):

CREATE TRIGGER [dbo].[TABLE_A_B_UPDATE] ON [dbo].[TABLE_A] 
FOR UPDATE
AS

IF UPDATE ( [ATTRIBUTES] )
BEGIN
    BEGIN TRAN
    -- update the attributes of the table C records
    UPDATE [TABLE_C]
    SET [ATTRIBUTES] = I.ATTRIBUTES
    FROM TABLE_C C
    INNER JOIN INSERTED I ON
        C.UNIQUE_ID = I.UNIQUE_ID AND C.USER_ID = I.USER_ID
    WHERE C.[ATTRIBUTES] & 4 = 0


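    -- propagate the updated attributes to the matching TABLE_B records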
    UPDATE  [TABLE_B]
    SET     [ATTRIBUTES] = I.ATTRIBUTES
    FROM TABLE_C C
    INNER JOIN INSERTED I ON
        C.UNIQUE_ID = I.UNIQUE_ID AND C.USER_ID = I.USER_ID
    INNER JOIN [TABLE_B] B ON
        B.UNIQUE_ID = C.UNIQUE_ID AND B.USER_ID = C.USER_ID
    WHERE B.[ATTRIBUTES] & 4 = 0

    COMMIT TRAN
END

Currently by design, there is only ever ONE record in the INSERTED table (we are only updating one record at a time from a UI).

What we have discovered is that, as the number of records in TABLE_B increases, trigger performance degrades rapidly. For example, with around 12,000 records in TABLE_B, this update statement takes about 40 seconds (our timeout is set to 30 seconds). As I remove records from TABLE_B, performance gradually improves. Since this was unacceptable, I had to find a way to improve the update statement.

Through testing and profiling, I found that the problem is with the second update statement (the one that updates TABLE_B). The first update statement runs without any problem, and if I change the second update statement to its equivalent SELECT statement, it also runs quickly.

The solution I found was to copy the single record from the INSERTED table into a #TEMP table and join on that instead. I was also able to do this with a table variable, but its performance was terrible until I created an index on it. This immediately resolved the problem, and the update now runs almost instantaneously.
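For reference, a minimal sketch of that workaround as it would sit inside the trigger body (the #INSERTED_ROW name and the exact statement shape are mine, not the actual production code):

-- capture the single modified row from INSERTED into a temp table
SELECT UNIQUE_ID, USER_ID, ATTRIBUTES
INTO #INSERTED_ROW
FROM INSERTED

-- then join TABLE_C/TABLE_B against the temp table instead of INSERTED
UPDATE B
SET [ATTRIBUTES] = I.ATTRIBUTES
FROM TABLE_C C
INNER JOIN #INSERTED_ROW I ON
    C.UNIQUE_ID = I.UNIQUE_ID AND C.USER_ID = I.USER_ID
INNER JOIN [TABLE_B] B ON
    B.UNIQUE_ID = C.UNIQUE_ID AND B.USER_ID = C.USER_ID
WHERE B.[ATTRIBUTES] & 4 = 0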

My question is this: why did this solve the performance problem? Perhaps I am looking at this the wrong way, but I can't imagine why I would need an index on a one-record table. I have read that the INSERTED table isn't created with an index on it, but it still seems odd to me that I should need one.

Thanks in advance!

EDIT: As pointed out, I forgot to mention some other relevant table structure tidbits.

TABLE_B indeed has a compound primary key/index created as follows:

CONSTRAINT [TABLE_B] PRIMARY KEY CLUSTERED 
(
    [USER_ID] ASC,
    [UNIQUE_ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

TABLE_C has the same index as above. All indexes were rebuilt at the start of testing.

Both tables have additional triggers that fire as well; however, during testing I disabled these to determine exactly where the performance hit was occurring. Disabling all the other triggers did not improve performance.

EXECUTION PLAN:
I'm not super savvy with execution plans for triggers, but as far as I can tell, you can view them from SQL Server Profiler with the Showplan option turned on. I believe this is the relevant plan:

Execution plan

I'm not sure what to think about this plan, other than noticing that the clustered index scan accounts for 80% of the cost. Hopefully this is what was being asked for; if not, I can repost. Thanks again!
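For completeness: the same plan can also be pulled from the plan cache rather than Profiler. This is just a sketch and assumes SQL Server 2008 or later, since it relies on sys.dm_exec_trigger_stats:

-- fetch the cached plan for the trigger in the current database
SELECT OBJECT_NAME(ts.object_id) AS trigger_name,
       qp.query_plan
FROM sys.dm_exec_trigger_stats AS ts
CROSS APPLY sys.dm_exec_query_plan(ts.plan_handle) AS qp
WHERE ts.database_id = DB_ID()
  AND ts.object_id = OBJECT_ID('dbo.TABLE_A_B_UPDATE')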

Best Answer

as the number of records increase in TABLE_B, trigger performance degrades rapidly

That would imply the updates do not have appropriate indexes available on TABLE_B or the tables it is joined to, or are unable to use them for some reason.

Without more detail about the table layout (what indexes you have on each of the tables) and the query plan that SQL Server has decided to use, we can't help you in much detail, so I suggest you update the question with that information.
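If it helps, something along these lines will list the key and included columns on both tables (a sketch only; adjust the schema names if they differ from dbo):

-- list every index on TABLE_B and TABLE_C with its key/included columns
SELECT i.name AS index_name,
       i.type_desc,
       c.name AS column_name,
       ic.is_included_column
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
INNER JOIN sys.columns AS c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id IN (OBJECT_ID('dbo.TABLE_B'), OBJECT_ID('dbo.TABLE_C'))
ORDER BY i.name, ic.key_ordinal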

Some first thoughts:

FROM TABLE_C C INNER JOIN INSERTED I ON C.UNIQUE_ID = I.UNIQUE_ID AND C.USER_ID = I.USER_ID

As you say the performance is tied to TABLE_B, this join is probably OK, though again the query plan would be helpful information to provide. Presumably you have an index on TABLE_C.UNIQUE_ID, or it is the primary key or subject to a unique constraint.

INNER JOIN [TABLE_B] B ON B.UNIQUE_ID = C.UNIQUE_ID AND B.USER_ID = C.USER_ID

With the right indexes/keys this should not be an issue either. For maximum efficiency for this specific statement, TABLE_C should have an index covering UNIQUE_ID & USER_ID, and TABLE_B one covering UNIQUE_ID & USER_ID and including ATTRIBUTES, as sketched below.
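As a sketch only (index names are mine, and the compound clustered primary keys described in the edit may already cover this), those indexes would look something like:

CREATE NONCLUSTERED INDEX IX_TABLE_C_UNIQUE_ID_USER_ID
    ON dbo.TABLE_C (UNIQUE_ID, USER_ID)

CREATE NONCLUSTERED INDEX IX_TABLE_B_UNIQUE_ID_USER_ID
    ON dbo.TABLE_B (UNIQUE_ID, USER_ID)
    INCLUDE (ATTRIBUTES)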

WHERE B.[ATTRIBUTES] & 4 = 0

This may be significant. Depending on how TABLE_B and TABLE_C are getting joined (again, provide the index details and query plan), this could be forcing a table scan of TABLE_B. The expense of that operation would grow linearly with the number of rows in TABLE_B.
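If the plan confirms that the bitwise filter itself is what forces the scan, one further option, which is not part of the suggestion above and is speculative on my part, is to expose the flag as a persisted computed column so it can be indexed (column and index names are mine):

-- speculative sketch: make the bit-flag expression indexable
ALTER TABLE dbo.TABLE_B
    ADD ATTR_BIT4 AS (ATTRIBUTES & 4) PERSISTED

CREATE NONCLUSTERED INDEX IX_TABLE_B_ATTR_BIT4
    ON dbo.TABLE_B (ATTR_BIT4)
    INCLUDE (ATTRIBUTES)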