There are situations where an InnoDB deadlock comes up when you least expect it. For example, SELECT queries can acquire locks on the gen_clust_index, a.k.a. the Clustered Index.
Here are three past questions I aggressively looked over with @RedBlueThing, the person who asked these questions. @RedBlueThing found workarounds for his questions.
Just to keep your question in perspective, when you look over these answers (don't look too deeply; even I get dizzy looking at my own convoluted answers) it should quickly become apparent that even SELECT queries can lock InnoDB data. Although autocommit may be enabled (making each query its own transaction), a single SQL statement can still fall victim to a deadlock.
You also have special cases of SELECT where you can lock specific rows on demand.
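For instance, SELECT ... FOR UPDATE takes exclusive row locks even though it is a read. A minimal sketch (the accounts table and its columns are hypothetical, not from the original questions):

```sql
-- Hypothetical table: accounts(id INT PRIMARY KEY, balance DECIMAL(10,2)).
START TRANSACTION;
-- Exclusive lock on the matching row, held until COMMIT or ROLLBACK;
-- LOCK IN SHARE MODE would take a shared lock instead.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
-- ... update the row based on what was just read ...
COMMIT;
```

Any other transaction trying to lock that same row will block (or deadlock) until this transaction ends.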
Based on the InnoDB Deadlocking link, the sequence of events causing this situation could theoretically be as follows:
- Your SQL UPDATEs a single row but generates an error
- The UPDATE causes a rollback of the one row
- The row has a lingering lock
Personally, that last point scares me. It would have been nice for MySQL to inform everyone of this quirk instead of just documenting it and hoping you Google for it. Yet that statement is from the MySQL Documentation. (Oh yeah, Oracle owns InnoDB.)
You will be able to avoid a lot of your race conditions by performing many of your steps in a single statement. A TOP() clause lets you set the flag on at most one row, and the OUTPUT clause returns that row to the application atomically.
I define a simple test table and populate it:
create table dbo.T(id int, IsProcessing bit default 0);
insert dbo.T(id) values (1),(2),(3),(4);
The output clause needs a table variable to receive the changed values:
declare @U table (id int);
A bit of debug code to make the "before" and "after" states obvious:
select 'Before' as B, * from dbo.T;
And the statement itself:
update top (1) dbo.T
set IsProcessing = 1
OUTPUT INSERTED.id
INTO @U(id)
where IsProcessing = 0;
And the results:
select 'After' as A, * from dbo.T;
select * from @U;
This is the output from the second execution of the above.
B id IsProcessing
Before 1 1
Before 2 0
Before 3 0
Before 4 0
A id IsProcessing
After 1 1
After 2 1
After 3 0
After 4 0
id
2
The row with id=2 has gone from IsProcessing=0 to IsProcessing=1, and that id is returned in the table variable.
With this trivial data the rows are processed in the sequence they were created. In a more complex environment the optimizer may choose any row that matches the WHERE clause. If you have a requirement to process rows in, say, time sequence, further qualification will be required.
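One way to add that qualification (a sketch, not part of the original answer) is to drive the UPDATE through a CTE with TOP and ORDER BY. Here the ordering column is the existing id; a CreatedAt timestamp column, if you had one, would work the same way:

```sql
declare @U table (id int);

-- The CTE pins down which single row qualifies, in a defined order;
-- updating through the CTE updates that underlying row in dbo.T.
with FirstUnprocessed as (
    select top (1) id, IsProcessing
    from dbo.T
    where IsProcessing = 0
    order by id            -- or CreatedAt for time sequence
)
update FirstUnprocessed
set IsProcessing = 1
output INSERTED.id into @U(id);
```

TOP without ORDER BY directly in an UPDATE gives no ordering guarantee; pushing both into the CTE is the usual idiom for making the choice deterministic.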
I haven't thought it all the way through, but I believe this will work at any isolation level, with or without explicit transactions.
It is of course possible that there are no rows with IsProcessing=0 at the time the statement runs. In this case the table variable will have zero rows.
To fully isolate each transaction from the others you could try sp_getapplock. This will add overhead and reduce concurrency. You have to be careful to release the app lock as soon as you can in both success and failure scenarios.
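A sketch of that pattern; the resource name 'ProcessT' is made up, and owning the lock by the transaction means a COMMIT or ROLLBACK releases it in both the success and failure paths:

```sql
begin try
    begin transaction;
    -- Exclusive, transaction-owned app lock: released automatically
    -- when the transaction commits or rolls back.
    exec sp_getapplock @Resource    = 'ProcessT',
                       @LockMode    = 'Exclusive',
                       @LockOwner   = 'Transaction',
                       @LockTimeout = 5000;   -- ms; fails rather than waits forever
    -- ... claim and process a row here ...
    commit transaction;
end try
begin catch
    if @@trancount > 0
        rollback transaction;  -- also releases the app lock
    throw;
end catch
```

Because only one session can hold the exclusive lock at a time, this serializes the whole claim-and-process step, which is exactly where the overhead and reduced concurrency come from.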
Best Answer
Perhaps it is the order of doing things. IMHO, the combination of SERIALIZABLE isolation and the checks every other transaction performs may be the problem.
For starters, what does SERIALIZABLE mean?
According to the MySQL 5.0 Certification Study Guide, page 421, bullet point 4 says:
The next paragraph says that with SERIALIZABLE, one transaction cannot modify rows if another has merely read them.
SUGGESTIONS

Switch to the REPEATABLE READ isolation level.
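For example, in a MySQL session:

```sql
-- Applies to subsequent transactions in this session only;
-- use GLOBAL (with the right privileges) to change the server default.
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```

REPEATABLE READ is InnoDB's default, so this mainly matters if something has explicitly raised the session to SERIALIZABLE.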