You'll want to change your code to something like this.
    DECLARE @AssetIssueId INT

    INSERT INTO [Issue] ([IssueId], [AssetId]) VALUES (?, ?)
    SET @AssetIssueId = SCOPE_IDENTITY()  -- identity generated by the insert above

    INSERT INTO [IssuePropertyValues]
    VALUES (@AssetIssueId, ?, ?)
As to the why...
If Java is starting a transaction for you automatically, and the insert of 10k+ rows takes a few moments, then this thread is holding an IX lock on the Issue table. Another thread then takes an IX lock on the Issue table (which is blocked). This thread then attempts to take an S lock on the Issue table (for the INSERT ... SELECT statement), which is blocked waiting on the other thread, and we now have a deadlock.
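If you want to watch these intent locks while the batch runs, one way to do it (a sketch against the standard DMV; run it from a separate connection, with the table name taken from the question) is:

    -- Object-level locks currently granted or waiting on the Issue table;
    -- request_status = 'WAIT' identifies the blocked request
    SELECT request_session_id,
           resource_type,
           request_mode,     -- IX, S, X, ...
           request_status    -- GRANT vs. WAIT
    FROM sys.dm_tran_locks
    WHERE resource_type = 'OBJECT'
      AND resource_database_id = DB_ID()
      AND resource_associated_entity_id = OBJECT_ID('dbo.Issue');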
Would that actually cause a table lock on the entire table, even though I'm only selecting certain rows into a table variable?
Lock escalation. If your 'select' is not ... selective, if it has to scan large portions of a table, then the engine may escalate to table locks. See Lock Escalation (Database Engine).
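For reference, on SQL Server 2008 and later the escalation behavior can also be controlled per table; this treats the symptom rather than the unselective scan itself, and dbo.Issue is simply the table from the question:

    -- Valid settings are TABLE (the default), AUTO, and DISABLE
    ALTER TABLE dbo.Issue SET (LOCK_ESCALATION = DISABLE);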
I can see there are 2 Lock Escalations occurring; does that mean row locks have been escalated to table locks?
Are they lock escalation events, or just lock escalation attempts? If they succeeded then the transaction has locked the entire table.
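One way to check (a sketch against the standard DMV; substitute your own table) is to compare escalation attempts with successful escalations:

    -- Attempts vs. successful promotions to table locks, per index
    SELECT OBJECT_NAME(object_id) AS table_name,
           index_id,
           index_lock_promotion_attempt_count,  -- escalation attempts
           index_lock_promotion_count           -- successful escalations
    FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Issue'), NULL, NULL);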
Can someone give me some tips on how to trace this deadlock down further?
Approaching this from the individual-locks point of view is not very productive. The typical investigation focuses on capturing and analyzing the deadlock graph; see Analyzing Deadlocks with SQL Server Profiler. Capture the deadlock graph and post it here, and maybe we can help. Upload the deadlock graph XML, not a picture of the deadlock; see How to: Save Deadlock Graphs (SQL Server Profiler).
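If you cannot run Profiler, one common alternative on SQL Server 2008 and later is to pull recent deadlock graphs out of the built-in system_health Extended Events session; a sketch:

    -- Extract xml_deadlock_report events from the system_health ring buffer
    SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
    FROM (
        SELECT XEvent.query('.') AS XEvent
        FROM (
            SELECT CAST(st.target_data AS xml) AS TargetData
            FROM sys.dm_xe_session_targets st
            JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
            WHERE s.name = 'system_health'
              AND st.target_name = 'ring_buffer'
        ) AS d
        CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(XEvent)
    ) AS src;

(On SQL Server 2008/2008 R2 the inner element is deadlock-list rather than deadlock.)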
Deadlock graph:
Spid 58 holds page 7:1:11066 in IX mode and is blocking spid 61, which wants it in S mode. Spid 61 holds page 7:1:1932345 in S mode and is blocking spid 58, which holds an IU lock on that page and wants to convert it to IX. Things are complicated by the addition of parallelism and a long list of 'me too' waiters, but the fundamental issue reduces to the one I described above.
This is a typical pattern caused by a lack of indexes. You have an updater, XWingNew.dbo.UpdatePostsCleanedUriUniqueUri, which is issuing an update that likely scans the entire XWingNew.dbo.Posts table. At the same time you have several readers, likely scanning the entire table too, in some LINQ-generated query that was truncated in the XML: SELECT [UnionAll3].[C2] AS [C1], [UnionAll3].[C3] AS [C2], [UnionAll3].[C4] AS [C3], [UnionAll3].[C5] AS [C4],... Because both the update and the select issue table scans, they are pretty much guaranteed to deadlock when the update attempts to convert the IU page locks it acquired in the scan phase to the IX locks needed to update the qualifying rows.
You have two alternatives to pursue right now:
- fix your application. The queries are clearly not tuned and no thought was given to the indexing strategy. You will not only fix the deadlock problem, your app will also be much faster. A sketch of what such an index might look like follows this list.
- the easy cop-out: enable read committed snapshot. This will not fix the application's flaws, but it will hide them and eliminate the deadlocks. The command is also shown below.
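For the first option, the shape of the fix is an index that lets the update locate its rows with a seek instead of a scan. The column name below is only a guess inferred from the procedure name, so treat it as illustrative:

    -- Hypothetical: assumes UpdatePostsCleanedUriUniqueUri filters on CleanedUri
    CREATE NONCLUSTERED INDEX IX_Posts_CleanedUri
        ON dbo.Posts (CleanedUri);

For the second option, it is a single command, though it needs a moment of exclusive access to the database, hence the ROLLBACK clause:

    ALTER DATABASE XWingNew
        SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;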
Best Answer
Exchange Events provide details and context on the deadlock itself. Specifically, the WaitType e_waitPipeGetRow is a resource that one SELECT statement owns and another SELECT statement is waiting on, ergo being blocked by the first SELECT (for example due to resource contention). This StackOverflow answer provides more context on it and on deadlocks specifically.

I'm not an expert on reading deadlock graph data, but my research seems to indicate that this kind of WaitType is potentially related to intra-query parallelism deadlocks (as mentioned in the previously linked StackOverflow answer).

You may find SentryOne's higher-level read on deadlocks helpful in understanding what's going on.
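If the graph does point at an intra-query parallelism deadlock, one commonly cited workaround (a mitigation, not a root-cause fix) is to force a serial plan for the query involved, which removes the exchange operators entirely. The query below is purely hypothetical; the OPTION clause is the point:

    -- Hypothetical query shape: MAXDOP 1 forces a serial plan, so no
    -- exchange (parallelism) waits such as e_waitPipeGetRow can arise
    SELECT t.Id, t.Name
    FROM dbo.SomeTable AS t
    WHERE t.Name LIKE 'A%'
    OPTION (MAXDOP 1);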