Deleted my previous answer when I realised the trace shows parallelism.
With a big warning to test this very thoroughly: you might alleviate the deadlocks by restricting MAXDOP and adding an UPDLOCK hint on Table1. I would also (as per @Aaron's suggestion) try EXISTS.
DELETE d
FROM   Table1 d WITH (UPDLOCK)
       INNER JOIN #deleteEntities de
               ON de.id = d.EntityId
WHERE  EXISTS
       (
           SELECT NULL
           FROM   dbo.Table2 orc
                  INNER JOIN dbo.Table3 orr
                          ON orr.id = orc.Table3Id
                  INNER JOIN dbo.Table4 oeh
                          ON oeh.id = orr.Table4Id
           WHERE  oeh.id = de.Table2Id
       )
OPTION (MAXDOP 1, RECOMPILE);
This would be the sledgehammer approach. It's likely you could eliminate the parallelism with appropriate indexing, but I can't advise on that unless we see an execution plan and/or statistics.
@deleteEntities in the deadlock trace is a little "suspicious". You're passing in a single identifier, but have this table variable in there? The optimiser is likely to produce an execution plan that estimates one row for it, so if it actually contains a variable number of rows I'd switch to a temporary table and force a recompile (as above).
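As a sketch of that switch (the table and column names here are illustrative, taken from the trace, and the source of the ids is a placeholder you'd replace with your own):

```sql
-- Hypothetical sketch: replace the @deleteEntities table variable with a
-- temp table so OPTION (RECOMPILE) sees the actual row count.
CREATE TABLE #deleteEntities (id INT PRIMARY KEY);

INSERT INTO #deleteEntities (id)
SELECT EntityId FROM dbo.SomeSourceOfIds; -- placeholder source; adjust to suit

DELETE d
FROM   Table1 d WITH (UPDLOCK)
       INNER JOIN #deleteEntities de
               ON de.id = d.EntityId
OPTION (MAXDOP 1, RECOMPILE); -- recompiles with the temp table's real cardinality

DROP TABLE #deleteEntities;
```

Unlike a table variable, the temp table has statistics, so the recompiled plan is costed on how many rows you actually passed in rather than a fixed estimate of one.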
No.
At least in my testing, both of them have identical execution plans and identical locking behaviour. Both place an IX lock on tableA and a schema stability lock on tableB straight away, and then follow the same behaviour in doing the clustered index scan on tableA: taking IU locks on the page and U locks on the key, then, in the event of a match, converting the page lock to IX and the key lock to X prior to deleting the row. This is exactly the same pattern as if the NOLOCK hint on tableA is removed entirely, so it is pointless in this case.
My original results had some differences between the two queries, in that the join version showed a series of schema stability locks on tableA apparently being acquired and immediately released, which didn't show up in the EXISTS version. That seems to be related to whether the "Include Actual Execution Plan" option is enabled in SSMS. Today, if I have this option enabled, this series appears in the locking info for both of them, so I'm not sure why originally it appeared in only one. Maybe some timing issue.
You can see this locking information using TF1200 as below (with the "Include Actual Execution Plan" option disabled).
Code
SET NOCOUNT ON;
CREATE TABLE tableA
(
    colA INT,
    colB INT,
    PRIMARY KEY (colA, colB)
);

CREATE TABLE tableB
(
    colA INT,
    colB INT,
    PRIMARY KEY (colA, colB)
);
SELECT OBJECT_ID('tableA') AS tableA,
OBJECT_ID('tableB') AS tableB;
INSERT INTO tableA VALUES (0,0),(1,1),(2,2),(3,3);
INSERT INTO tableB VALUES (1,1),(2,2),(4,4),(5,5),(6,6),(7,7);
SELECT *, %%lockres%% AS lockres
FROM tableA;
DECLARE @JoinSQLNoLockTableA nvarchar(max) = N'
DELETE tableA
FROM tableA a WITH(NOLOCK)
    JOIN tableB b WITH(NOLOCK)
        ON b.colA = a.colA
       AND b.colB = a.colB;'
DECLARE @ExistsSQL nvarchar(max) = N'
DELETE FROM tableA
WHERE EXISTS
(
    SELECT *
    FROM tableB b WITH(NOLOCK)
    WHERE b.colA = tableA.colA
      AND b.colB = tableA.colB
);'
DECLARE @JoinSQLWithoutNoLockTableA nvarchar(max) = REPLACE(@JoinSQLNoLockTableA,'tableA a WITH(NOLOCK)', 'tableA a')
/*Run the commands first with the trace flag off so the locking
info is less full of irrelevant stuff about plan compilation */
EXEC (@JoinSQLNoLockTableA);
INSERT INTO tableA VALUES (1,1),(2,2);
EXEC (@ExistsSQL);
INSERT INTO tableA VALUES (1,1),(2,2);
EXEC (@JoinSQLWithoutNoLockTableA);
INSERT INTO tableA VALUES (1,1),(2,2);
/*Run with TF1200 on*/
DBCC TRACEON(-1,3604,1200) WITH NO_INFOMSGS;
PRINT '@JoinSQLNoLockTableA - Start';
EXEC (@JoinSQLNoLockTableA);
PRINT '@JoinSQLNoLockTableA - End';
DBCC TRACEOFF(-1,3604,1200) WITH NO_INFOMSGS;
INSERT INTO tableA VALUES (1,1),(2,2);
DBCC TRACEON(-1,3604,1200) WITH NO_INFOMSGS;
PRINT '@ExistsSQL - Start';
EXEC (@ExistsSQL);
PRINT '@ExistsSQL - End';
DBCC TRACEOFF(-1,3604,1200) WITH NO_INFOMSGS;
INSERT INTO tableA VALUES (1,1),(2,2);
DBCC TRACEON(-1,3604,1200) WITH NO_INFOMSGS;
PRINT '@JoinSQLWithoutNoLockTableA - Start';
EXEC (@JoinSQLWithoutNoLockTableA);
PRINT '@JoinSQLWithoutNoLockTableA - End';
DBCC TRACEOFF(-1,3604,1200) WITH NO_INFOMSGS;
DROP TABLE tableA,
tableB;
ObjectIds
tableA tableB
----------- -----------
308196148 372196376
LockRes
colA colB lockres
----------- ----------- ------------------------------
0 0 (00009620dd9a)
1 1 (02006d47cbee)
2 2 (040060eff172)
3 3 (06009b88e706)
Output (All three queries)
Process 52 acquiring Sch-S lock on OBJECT: 23:372196376:0 (class bit0 ref1) result: OK
Process 52 acquiring IX lock on OBJECT: 23:308196148:0 (class bit2000000 ref1) result: OK
Process 52 acquiring IU lock on PAGE: 23:1:14144 (class bit0 ref1) result: OK
Process 52 acquiring U lock on KEY: 23:72057594051231744 (00009620dd9a) (class bit0 ref1) result: OK
Process 52 releasing lock on KEY: 23:72057594051231744 (00009620dd9a)
Process 52 acquiring U lock on KEY: 23:72057594051231744 (02006d47cbee) (class bit0 ref1) result: OK
Process 52 acquiring IX lock on PAGE: 23:1:14144 (class bit2000000 ref0) result: OK
Process 52 acquiring X lock on KEY: 23:72057594051231744 (02006d47cbee) (class bit2000000 ref0) result: OK
Process 52 releasing lock reference on KEY: 23:72057594051231744 (02006d47cbee)
Process 52 acquiring U lock on KEY: 23:72057594051231744 (040060eff172) (class bit0 ref1) result: OK
Process 52 acquiring X lock on KEY: 23:72057594051231744 (040060eff172) (class bit2000000 ref0) result: OK
Process 52 releasing lock reference on KEY: 23:72057594051231744 (040060eff172)
Process 52 acquiring U lock on KEY: 23:72057594051231744 (06009b88e706) (class bit0 ref1) result: OK
Process 52 releasing lock on KEY: 23:72057594051231744 (06009b88e706)
Process 52 releasing lock reference on PAGE: 23:1:14144
Process 52 releasing lock on OBJECT: 23:372196376:0
Best Answer
No, T-SQL statements always execute sequentially in SQL Server. The point is that the SELECT may read uncommitted changes made by other concurrently-executing transactions.

Yes, but only because error 601 is only possible when the transaction isolation level is READ UNCOMMITTED. Moving to a different isolation level prevents that particular error from occurring.

Additional information
Error 601 can occur for a variety of reasons, but all share a common theme: the SQL Server engine was following some pointer chain or other, when it encountered a situation where a structure it was expecting to be present had been moved or deleted by another concurrently-executing process.
The number of cases where error 601 can occur has been progressively reduced over SQL Server releases, with SQL Server 2012 the least likely to return this error, though it is still possible.
My own view is that all error 601 incidents are bugs - for a suitable value of 'bug'. The behaviour is certainly undesirable, and goes well beyond the SQL standard's description of the phenomena that may be encountered under the READ UNCOMMITTED isolation level.

While it is certainly true that the SQL standard does not define the detailed behaviours of the different isolation levels very well, what it does say leads to the common belief that the only consequence of READ UNCOMMITTED is that a transaction may see data that has been changed by another transaction before that other transaction commits. As a result, using the READ UNCOMMITTED isolation level is often justified on the basis that almost all transactions commit very quickly (and very few roll back), so reading 'dirty' data is just a timing difference.

Unfortunately, the implementation of READ UNCOMMITTED in SQL Server goes much further than simple dirty reads. A READ UNCOMMITTED transaction in SQL Server can return duplicated data, return a partial read of a large data type, skip data records entirely, or simply fail with a 601 error.

It is possible to experience some of these behaviours under READ COMMITTED and even REPEATABLE READ. This leads some to conclude that the only acceptable isolation levels are those that provide at least statement-level consistency, namely row-versioning READ COMMITTED, SNAPSHOT isolation, or SERIALIZABLE.

Of these, row-versioning READ COMMITTED is usually the easiest to transition to. See this Books Online topic and its subtree for details of the row-versioning isolation levels.
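As a minimal sketch of that transition (the database name here is a placeholder; enabling the option needs exclusive access, hence the termination clause):

```sql
-- Hypothetical database name. WITH ROLLBACK IMMEDIATE kicks out other
-- connections so the option change can take effect.
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- After this, sessions running at default READ COMMITTED automatically use
-- row versioning. SNAPSHOT isolation is a separate, additional option:
ALTER DATABASE YourDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON;
```

The appeal of READ_COMMITTED_SNAPSHOT is that application code keeps its default isolation level unchanged; only the database setting moves.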