SQL Server – Why is table lock escalation happening in this scenario?

lock-escalation, locking, sql-server-2017

I have a table Table1 that is updated in small chunks using the query below:

UPDATE TOP (1000) Table1
    SET VarcharColumn1 = 'SomeValue'
FROM Table1
WHERE ID IN (SELECT ID FROM Table2)
      AND VarcharColumn1 IS NULL

Additional details:

Table2 has 90,000 rows, and the total number of rows that need to be updated in Table1 is also 90,000 (a 1-to-1 relationship).

Also, when rows in Table1 are updated, a trigger on Table1 inserts the rows as they were before the update into table Table1History.
So when I update 1,000 rows in Table1, 1,000 rows are inserted into Table1History.
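
The trigger is along these lines (a sketch only; the trigger name and column list are illustrative, not the real definition):

-- Illustrative sketch of the history trigger: "deleted" holds the pre-update rows.
-- Trigger name and column list are placeholders.
CREATE TRIGGER trg_Table1_History ON Table1
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO Table1History (ID, VarcharColumn1)
    SELECT ID, VarcharColumn1
    FROM deleted;
END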

Question:

When I update the top 100 rows, no table lock escalation happens.

I monitor this using the Extended Events lock_escalation event, and also in Performance Monitor via SQLServer:Access Methods – Table Lock Escalations/sec.
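
For reference, a minimal sketch of the kind of Extended Events session I use (the session name and target here are placeholders):

-- Minimal Extended Events session capturing lock escalations.
CREATE EVENT SESSION TrackLockEscalation ON SERVER
ADD EVENT sqlserver.lock_escalation
    (ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION TrackLockEscalation ON SERVER STATE = START;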

When I update the top 1,000 or even 500 rows, table lock escalation IS happening on Table1.

So I wonder: what is the mechanism or formula SQL Server uses to decide when to escalate locking to the table level?

Google results say 5,000 rows is the threshold, but obviously in my case 1,000 or even 500 rows is enough to cause table lock escalation on Table1.

[Screenshot: Extended Events lock_escalation events captured during the update]

Best Answer

Lock escalation is not based on exactly 5,000 rows, but rather on the number of locks a statement acquires at a given level of the locking hierarchy. Once a statement hits the threshold, SQL Server attempts to escalate its row or page locks on that object to a single table-level lock, which usually has a negative impact on the application. Note also that the number of locks a statement acquires is not the same as the number of rows it updates: under the default read committed isolation level, the scan for qualifying rows takes update locks on every row it examines, so an UPDATE TOP (500) can easily acquire more than 5,000 locks on Table1. The easiest workaround is to consider how many rows you are affecting in the UPDATE TOP () and how many pages that correlates to. The number of rows per page is determined by the width of the row: a table with many columns (requiring lots of byte storage) has wider rows than a table with fewer columns, so fewer rows fit on a page. A page can hold roughly 8 KB of data.
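
As a rough sketch of that calculation (assuming Table1 lives in the dbo schema), sys.dm_db_index_physical_stats reports the average row size, from which you can approximate rows per page:

-- Sketch: approximate how many rows fit on an 8 KB page for Table1.
-- Assumes the dbo schema; 8096 is the usable space per page (8192 minus the header).
SELECT index_id,
       avg_record_size_in_bytes,
       8096 / NULLIF(avg_record_size_in_bytes, 0) AS approx_rows_per_page
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Table1'), NULL, NULL, 'DETAILED')
WHERE index_level = 0;  -- leaf level only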

"Books Online states that the threshold for a lock escalation is reached when the statement has acquired in excess of 5,000 row or page level locks on a single instance of an object. Also some blogs (which I came across in the Internet) just take 5000 value as a threshold point and not consider other parameters. When rows exceed 5000 an attempt is made to convert a higher-level lock on the object (either an IS or an IX lock) to an S lock or an X lock on the table. If this fails, perhaps because the table is being used by someone else, then SQL Server will back off and try again if it acquires another 1,250 locks on the same instance of the object. Remember that all locks attained must originate from the same object within the statement to qualify for exceeding the lock threshold, so 4,000 locks from tableA that is self-joined (i.e., another set of 4,000 locks) would not qualify." [taken from book "Professional SQL Server 2008 Internals and Troubleshooting" by Christian Bolton et al. published by Wrox ]

Good info on the anatomy of a page in SQL Server: https://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-page/

There is an option to disable lock escalation as part of your table definition, but it's not advisable in most cases. I'd advise not going this route until you've tried designing with lock escalation in mind. See more here: What is disadvantage of disabling lock escalation in SQL Server 2016
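
For completeness, the table-level switch mentioned above looks like this (a sketch only, assuming the dbo schema; generally leave it at the default):

-- Disables lock escalation for Table1 entirely; valid options are AUTO | TABLE | DISABLE.
ALTER TABLE dbo.Table1 SET (LOCK_ESCALATION = DISABLE);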