Under REPEATABLE READ, there is always row-level locking imposed via gen_clust_index (a.k.a. the clustered index). This is the beauty of transactions. What is even more interesting is that InnoDB has four transaction isolation levels, not just one:
There are four values for tx_isolation you can set:
- READ UNCOMMITTED
- READ COMMITTED
- REPEATABLE READ (the default)
- SERIALIZABLE
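For example, the level can be set per session or globally (the variable is named tx_isolation in MySQL 5.x; it was renamed transaction_isolation in 8.0):

```sql
-- Set the isolation level for the current session only
SET SESSION tx_isolation = 'READ-COMMITTED';

-- Equivalent standard syntax
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Verify the current setting
SELECT @@session.tx_isolation;
```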
In your particular case, inserting data into TableA does not immediately get written to the table's data pages on disk. The necessary changes are recorded in three (3) distinct places:
- the log buffer in memory
- inside ibdata1:
  - the undo tablespace
  - rollback segments
  - the doublewrite buffer
- redo log info in either ib_logfile0 or ib_logfile1
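You can see how these structures are sized and laid out on your server with a few SHOW statements (the values returned are your server's settings, not anything specific to this problem):

```sql
-- Redo log files (ib_logfile0 / ib_logfile1): size and count
SHOW VARIABLES LIKE 'innodb_log_file%';

-- System tablespace (ibdata1) layout
SHOW VARIABLES LIKE 'innodb_data_file_path';

-- In-memory log buffer size, in bytes
SHOW VARIABLES LIKE 'innodb_log_buffer_size';

-- Whether the doublewrite buffer is enabled
SHOW VARIABLES LIKE 'innodb_doublewrite';
```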
The same applies to the DELETE in step 5.
Executing a ROLLBACK will undo the delete and then undo the inserts.
You must remember something very important: if you want to roll back multiple SQL commands, you must begin like this:
SET autocommit = 0;
START TRANSACTION;
-- transaction begins
-- read data from TableA
-- if TableA has any data: end the transaction (via ROLLBACK) and exit
-- if TableA has no data, proceed further:
--     delete a record in TableB
-- transaction ends
COMMIT;
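As a concrete sketch of that flow (the column names and values here are placeholders, not from the original schema):

```sql
SET autocommit = 0;
START TRANSACTION;

-- Insert new rows; nothing is permanent until COMMIT
INSERT INTO TableA (col1) VALUES ('x'), ('y');

-- Delete a row from TableB inside the same transaction
DELETE FROM TableB WHERE id = 42;

-- Either make both changes permanent ...
COMMIT;
-- ... or, had something gone wrong, undo both instead:
-- ROLLBACK;
```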
Give it a Try !!!
When everyone is using REPEATABLE READ:
- your INSERTs are seen only by you
- someone else's DELETEs are seen only by the other person
CAVEAT: Table-level locking is never implicit for InnoDB. If you want to lock a table, you must issue LOCK TABLES explicitly.
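For example (mytable is a placeholder table name):

```sql
-- Acquire an explicit exclusive lock on the whole table
LOCK TABLES mytable WRITE;

-- ... do work that requires exclusive access ...

-- Release all table locks held by this session
UNLOCK TABLES;
```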
NEW ANSWER (MySQL-style dynamic SQL):
OK, this one tackles the problem in the way one of the other posters described: reversing the order in which mutually incompatible exclusive locks are acquired, so that however many occur, they are held only for the shortest time at the end of transaction execution.
This is accomplished by separating the read part of the statement into its own SELECT and dynamically generating a DELETE statement that is forced to run last by order of statement appearance, and which affects only the proc_warnings table.
A demo is available at SQL Fiddle:
The first link shows the schema with sample data, and a simple query for rows that match on ivehicle_id=2. Two rows result, as none of them have been deleted.
The second link shows the same schema and sample data, but passes the value 2 to the DeleteEntries stored procedure, telling it to delete proc_warnings entries for ivehicle_id=2. The simple query for rows then returns no results, as they have all been successfully deleted.
The demo links only demonstrate that the code works as intended to delete. A user with the proper test environment can comment on whether this solves the problem of the blocked thread.
Here is the code as well for convenience:
DELIMITER $$

CREATE PROCEDURE DeleteEntries (input_vid INT)
BEGIN
    -- SET (rather than SELECT @x := ...) avoids emitting extra result sets
    SET @idstring := '';
    SET @idnum    := 0;
    SET @del_stmt := '';

    -- Build a comma-separated list of matching proc_warnings ids.
    -- ORDER BY ... LIMIT 1 collapses the output to a single row while the
    -- user variables still accumulate across every scanned row.
    SELECT @idnum := @idnum + 1 AS idnum_col,
           @idstring := CONCAT(@idstring,
                               CASE WHEN CHARACTER_LENGTH(@idstring) > 0
                                    THEN ',' ELSE '' END,
                               CAST(id AS CHAR(10))) AS idstring_col
    FROM proc_warnings
    WHERE EXISTS (
        SELECT 0
        FROM day_position
        WHERE day_position.transaction_id = proc_warnings.transaction_id
          AND day_position.dirty_data = 1
          AND EXISTS (
              SELECT 0
              FROM ivehicle_days
              WHERE ivehicle_days.id = day_position.ivehicle_day_id
                AND ivehicle_days.ivehicle_id = input_vid
          )
    )
    ORDER BY idnum_col DESC
    LIMIT 1;

    -- Only build and run the DELETE if at least one id was collected
    IF (@idnum > 0) THEN
        SET @del_stmt := CONCAT('DELETE FROM proc_warnings WHERE id IN (',
                                @idstring, ');');
        PREPARE del_stmt_hndl FROM @del_stmt;
        EXECUTE del_stmt_hndl;
        DEALLOCATE PREPARE del_stmt_hndl;
    END IF;
END$$

DELIMITER ;
This is the syntax to call the program from within a transaction:
CALL DeleteEntries(2);
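Wrapped in an explicit transaction, the call looks like this:

```sql
SET autocommit = 0;
START TRANSACTION;
CALL DeleteEntries(2);
COMMIT;
```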
ORIGINAL ANSWER (still think it's not too shabby)
Looks like 2 issues: 1) a slow query, 2) unexpected locking behavior.
As regards issue #1: slow queries are often resolved by the same two techniques in tandem, query statement simplification and useful additions of (or modifications to) indexes. You yourself already made the connection to indexes: without them the optimizer cannot narrow its search to a limited set of rows, and every extra row scanned in each table multiplies the amount of extra work that must be done.
REVISED AFTER SEEING POST OF SCHEMA AND INDEXES:
But I imagine you'll get the most performance benefit for this query by making sure you have a good index configuration. You can go for somewhat better delete performance, or possibly even better delete performance with covering indexes, at the trade-off of larger indexes and perhaps noticeably slower insert performance on the tables to which the additional index structure is added.
SOMEWHAT BETTER:
CREATE TABLE `day_position` (
...,
KEY `day_position__id_rvrsd` (`dirty_data`, `ivehicle_day_id`)
) ;
CREATE TABLE `ivehicle_days` (
...,
KEY `ivehicle_days__vid_no_sort_index` (`ivehicle_id`)
);
REVISED HERE TOO: Since the query takes as long as it does to run, I'd leave dirty_data in the index, and I certainly got it wrong when I placed it after ivehicle_day_id in index order: it should come first.
But if I had my hands on it at this point, since there must be a good amount of data to make it take that long, I would just go for all-covering indexes, to make sure I was getting the best indexing my troubleshooting time could buy, if nothing else to rule that part of the problem out.
BEST/COVERING INDEXES:
CREATE TABLE `day_position` (
...,
KEY `day_position__id_rvrsd_trnsid_cvrng` (`dirty_data`, `ivehicle_day_id`, `transaction_id`)
) ;
CREATE TABLE `ivehicle_days` (
...,
UNIQUE KEY `ivehicle_days__vid_id_cvrng` (ivehicle_id, id)
);
CREATE TABLE `proc_warnings` (
..., /* rename primary key */
CONSTRAINT pk_proc_warnings PRIMARY KEY (id),
UNIQUE KEY `proc_warnings__transaction_id_id_cvrng` (`transaction_id`, `id`)
);
There are two performance optimization goals behind the last two suggested changes:
1) If the search keys for successively accessed tables are not the same as the clustered-key results returned from the currently accessed table, the covering index eliminates what would otherwise be a second set of index-seek-with-scan operations on the clustered index.
2) Even where that is not the case, there is still at least the possibility that the optimizer can select a more efficient join algorithm, since the indexes keep the required join keys in sorted order.
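One way to confirm the covering indexes are actually being used is to EXPLAIN the read side of the delete and look for "Using index" in the Extra column (note that EXPLAIN does not accept a DELETE statement until MySQL 5.6, hence the equivalent SELECT):

```sql
-- "Using index" in Extra means the index alone satisfied that table's
-- access, i.e. the index is covering for this query
EXPLAIN
SELECT pw.id
FROM proc_warnings pw
INNER JOIN day_position dp
    ON dp.transaction_id = pw.transaction_id
INNER JOIN ivehicle_days vd
    ON vd.id = dp.ivehicle_day_id
WHERE vd.ivehicle_id = 2 AND dp.dirty_data = 1;
```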
Your query seems about as simplified as it can be (copied here in case it is edited later):
DELETE pw
FROM proc_warnings pw
INNER JOIN day_position dp
ON dp.transaction_id = pw.transaction_id
INNER JOIN ivehicle_days vd
ON vd.id = dp.ivehicle_day_id
WHERE vd.ivehicle_id=2 AND dp.dirty_data=1;
Unless, of course, there's something about the written join order that affects how the query optimizer proceeds, in which case you could try some of the rewrite suggestions others have provided, including perhaps this one with index hints (optional):
DELETE FROM proc_warnings
FORCE INDEX (`proc_warnings__transaction_id_id_cvrng`, `pk_proc_warnings`)
WHERE EXISTS (
SELECT 0
FROM day_position
FORCE INDEX (`day_position__id_rvrsd_trnsid_cvrng`)
WHERE day_position.transaction_id = proc_warnings.transaction_id
AND day_position.dirty_data = 1
AND EXISTS (
SELECT 0
FROM ivehicle_days
FORCE INDEX (`ivehicle_days__vid_id_cvrng`)
WHERE ivehicle_days.id = day_position.ivehicle_day_id
AND ivehicle_days.ivehicle_id = ?
)
);
As regards #2, unexpected locking behavior.
As I can see both queries wants an exclusive X lock on a row with
primary key = 53. However, neither of them must delete rows from
proc_warnings table. I just don't understand why the index is locked.
I guess it is the index that's locked because the row of data to be locked lives in a clustered index, i.e. the single copy of the row itself resides in the index.
It would be locked because, according to http://dev.mysql.com/doc/refman/5.1/en/innodb-locks-set.html:
...a DELETE generally set record locks on every index record that is
scanned in the processing of the SQL statement. It does not matter
whether there are WHERE conditions in the statement that would exclude
the row. InnoDB does not remember the exact WHERE condition, but only
knows which index ranges were scanned.
You also mentioned above:
...as for me the main feature of READ COMMITTED is how it deals with
locks. It should release the index locks of non-matching rows, but it
doesn't.
and provided the following reference for that:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html#isolevel_read-committed
This states the same as you say, except that according to that same reference there is a condition under which a lock shall be released:
Also, record locks for nonmatching rows are released after MySQL has
evaluated the WHERE condition.
Which is reiterated as well at this manual page http://dev.mysql.com/doc/refman/5.1/en/innodb-record-level-locks.html
There are also other effects of using the READ COMMITTED isolation
level or enabling innodb_locks_unsafe_for_binlog: Record locks for
nonmatching rows are released after MySQL has evaluated the WHERE
condition.
So we're told that the WHERE condition must be evaluated before the lock can be released.
Unfortunately, we're not told exactly when the WHERE condition is evaluated, and that is probably subject to change from one optimizer plan to another. But it does tell us that lock release depends somehow on how query execution performs, the optimization of which, as discussed above, depends on careful writing of the statement and judicious use of indexes. It could also be improved by better table design, but that is probably best left to a separate question.
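On 5.1 with the InnoDB plugin (and on 5.5/5.6), one way to observe this directly is to query the INFORMATION_SCHEMA lock tables from a third session while the two sessions are blocked (these tables were removed in 8.0 in favor of performance_schema.data_locks):

```sql
-- Which transactions hold or request which record locks, and on which index
SELECT lock_id, lock_trx_id, lock_mode, lock_type, lock_table, lock_index
FROM INFORMATION_SCHEMA.INNODB_LOCKS;

-- Who is waiting on whom
SELECT requesting_trx_id, requested_lock_id,
       blocking_trx_id, blocking_lock_id
FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS;
```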
Moreover, the index is not locked either when proc_warnings table is
empty
The database can't lock records within the index if there are none.
Moreover, the index is not locked when...the day_position table
contains fewer number of rows (i.e. one hundred rows).
This could mean numerous things, such as (but probably not limited to): a different execution plan due to a change in statistics, or a lock held too briefly to be observed because a much smaller data set made the join, and thus the whole execution, much faster.
Best Answer
MySQL is taking too much of the InnoDB buffer pool for your table locking. Insert
innodb_buffer_pool_size=2G
in your my.cnf file, which is most probably in /etc/mysql/.
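After editing my.cnf and restarting mysqld, you can verify the new value took effect (it is reported in bytes, so 2G appears as 2147483648):

```sql
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
```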