We are cleaning up last year's transactions from our primary database. We use a public database link to delete the records day by day from another database. Each day's records number around 30-40k, but on some days transactions peak at 1 million records. When the delete procedure hits such a day, it locks the primary database and causes ORA-02049 lock timeouts. What would be the ideal solution for clearing the records? We also tried the alter table truncate partition
approach; however, the locking situation is even worse with that one. As we clear the database, new records are being written to it in real time, so I cannot restart the database or increase the distributed_lock_timeout
parameter. Any solution is appreciated.
Oracle – Solution for ORA-02049 Timeout: Distributed Transaction Waiting for Lock
Tags: deadlock, locking, oracle, oracle-11g, oracle-11g-r2
Related Solutions
NEW ANSWER (MySQL-style dynamic SQL): OK, this one tackles the problem in the way one of the other posters described: reversing the order in which mutually incompatible exclusive locks are acquired, so that however many of them occur, they are held only for the least amount of time at the end of transaction execution.
This is accomplished by separating the read part of the statement into its own SELECT statement and dynamically generating a DELETE statement that is forced to run last by order of statement appearance, and which affects only the proc_warnings table.
A demo is available at SQL Fiddle:
The first link shows the schema with sample data and a simple query for rows that match on ivehicle_id=2. Two rows result, as none of them have been deleted.
The second link shows the same schema and sample data, but passes the value 2 to the DeleteEntries stored program, telling the SP to delete the proc_warnings entries for ivehicle_id=2. The simple query for rows then returns no results, as they've all been successfully deleted.
The demo links only demonstrate that the code deletes as intended. A user with the proper test environment can comment on whether this solves the problem of the blocked thread.
Here is the code as well for convenience:
-- When running this in the mysql command-line client, change the statement
-- delimiter first so the semicolons inside the body are not treated as
-- end of statement:
DELIMITER //

CREATE PROCEDURE DeleteEntries (input_vid INT)
BEGIN
    -- Initialize the session variables used to accumulate the id list.
    SET @idstring = '';
    SET @idnum = 0;
    SET @del_stmt = '';

    -- Read phase: scan the matching proc_warnings rows. The ORDER BY ...
    -- LIMIT 1 returns only the last row to the caller, but the variable
    -- assignments are evaluated for every scanned row, so @idstring ends
    -- up holding a comma-separated list of every matching id.
    SELECT @idnum := @idnum + 1 AS idnum_col,
           @idstring := CONCAT(@idstring,
                               CASE WHEN CHARACTER_LENGTH(@idstring) > 0
                                    THEN ',' ELSE '' END,
                               CAST(id AS CHAR(10))) AS idstring_col
    FROM proc_warnings
    WHERE EXISTS (
        SELECT 0
        FROM day_position
        WHERE day_position.transaction_id = proc_warnings.transaction_id
          AND day_position.dirty_data = 1
          AND EXISTS (
              SELECT 0
              FROM ivehicle_days
              WHERE ivehicle_days.id = day_position.ivehicle_day_id
                AND ivehicle_days.ivehicle_id = input_vid
          )
    )
    ORDER BY idnum_col DESC
    LIMIT 1;

    -- Write phase: if anything matched, build and run the DELETE against
    -- proc_warnings only, so its exclusive locks are acquired last.
    IF (@idnum > 0) THEN
        SET @del_stmt = CONCAT('DELETE FROM proc_warnings WHERE id IN (', @idstring, ')');
        PREPARE del_stmt_hndl FROM @del_stmt;
        EXECUTE del_stmt_hndl;
        DEALLOCATE PREPARE del_stmt_hndl;
    END IF;
END//

DELIMITER ;
This is the syntax to call the program from within a transaction:
CALL DeleteEntries(2);
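A fuller usage sketch (the surrounding statements are illustrative, not part of the original demo): running the CALL inside an explicit transaction means the generated DELETE executes last, so its exclusive locks on proc_warnings are held only for the short interval before the COMMIT.
START TRANSACTION;
-- ... the transaction's other reads and writes ...
CALL DeleteEntries(2);  -- the generated DELETE takes its exclusive locks here, last
COMMIT;                 -- ... and releases them immediately afterward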
ORIGINAL ANSWER (I still think it's not too shabby): It looks like there are two issues: 1) a slow query, and 2) unexpected locking behavior.
As regards issue #1, slow queries are often resolved by two techniques in tandem: query statement simplification, and useful additions of (or modifications to) indexes. You yourself already made the connection to indexes: without them the optimizer cannot seek to a limited set of rows to process, and every extra row scanned in each table multiplies the amount of extra work that must be done.
REVISED AFTER SEEING POST OF SCHEMA AND INDEXES: I imagine you'll get the most performance benefit for this query by making sure you have a good index configuration. You can go for better delete performance, and possibly even better delete performance with covering indexes, at the trade-off of larger indexes and perhaps noticeably slower insert performance on the tables to which the additional index structure is added.
SOMEWHAT BETTER:
CREATE TABLE `day_position` (
...,
KEY `day_position__id_rvrsd` (`dirty_data`, `ivehicle_day_id`)
) ;
CREATE TABLE `ivehicle_days` (
...,
KEY `ivehicle_days__vid_no_sort_index` (`ivehicle_id`)
);
REVISED HERE TOO: Since the query takes as long as it does to run, I'd leave dirty_data in the index. I also got the column order wrong when I placed it after ivehicle_day_id; it should be first.
But if I had my hands on it at this point, since there must be a good amount of data to make it take that long, I would just go for covering indexes across the board, to make sure I was getting the best indexing my troubleshooting time could buy, if nothing else to rule that part of the problem out.
BEST/COVERING INDEXES:
CREATE TABLE `day_position` (
...,
KEY `day_position__id_rvrsd_trnsid_cvrng` (`dirty_data`, `ivehicle_day_id`, `transaction_id`)
) ;
CREATE TABLE `ivehicle_days` (
...,
UNIQUE KEY `ivehicle_days__vid_id_cvrng` (ivehicle_id, id)
);
CREATE TABLE `proc_warnings` (
..., /* rename primary key */
CONSTRAINT pk_proc_warnings PRIMARY KEY (id),
UNIQUE KEY `proc_warnings__transaction_id_id_cvrng` (`transaction_id`, `id`)
);
There are two performance optimization goals sought by the last two change suggestions:
1) If the search keys for the next table to be accessed are available directly from the secondary index of the currently accessed table, we eliminate what would otherwise be a second set of index-seek-with-scan operations against the clustered index.
2) Even where that is not the case, there is at least the possibility that the optimizer can select a more efficient join algorithm, since the indexes keep the required join keys in sorted order.
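As a quick check (a sketch, not part of the original answer), you can run EXPLAIN against the SELECT equivalent of the delete's join; when a secondary index fully covers a table's portion of the query, MySQL shows "Using index" in the Extra column for that table:
EXPLAIN SELECT pw.id
FROM proc_warnings pw
INNER JOIN day_position dp ON dp.transaction_id = pw.transaction_id
INNER JOIN ivehicle_days vd ON vd.id = dp.ivehicle_day_id
WHERE vd.ivehicle_id = 2 AND dp.dirty_data = 1;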
Your query seems about as simplified as it can be (copied here in case it is edited later):
DELETE pw
FROM proc_warnings pw
INNER JOIN day_position dp
ON dp.transaction_id = pw.transaction_id
INNER JOIN ivehicle_days vd
ON vd.id = dp.ivehicle_day_id
WHERE vd.ivehicle_id=2 AND dp.dirty_data=1;
Unless of course there's something about the written join order that affects how the query optimizer proceeds, in which case you could try some of the rewrite suggestions others have provided, including perhaps this one with index hints (optional):
DELETE FROM proc_warnings
FORCE INDEX (`proc_warnings__transaction_id_id_cvrng`, `pk_proc_warnings`)
WHERE EXISTS (
SELECT 0
FROM day_position
FORCE INDEX (`day_position__id_rvrsd_trnsid_cvrng`)
WHERE day_position.transaction_id = proc_warnings.transaction_id
AND day_position.dirty_data = 1
AND EXISTS (
SELECT 0
FROM ivehicle_days
FORCE INDEX (`ivehicle_days__vid_id_cvrng`)
WHERE ivehicle_days.id = day_position.ivehicle_day_id
AND ivehicle_days.ivehicle_id = ?
)
);
As regards issue #2, the unexpected locking behavior. You wrote:
As I can see, both queries want an exclusive X lock on the row with primary key = 53. However, neither of them needs to delete rows from the proc_warnings table. I just don't understand why the index is locked.
I'd guess it is the index that's locked because the row to be locked lives in a clustered index, i.e. the single row of data itself resides in the index.
It would be locked because:
1) according to http://dev.mysql.com/doc/refman/5.1/en/innodb-locks-set.html
...a DELETE generally set record locks on every index record that is scanned in the processing of the SQL statement. It does not matter whether there are WHERE conditions in the statement that would exclude the row. InnoDB does not remember the exact WHERE condition, but only knows which index ranges were scanned.
You also mentioned above:
...as for me the main feature of READ COMMITTED is how it deals with locks. It should release the index locks of non-matching rows, but it doesn't.
and provided the following reference for that:
http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html#isolevel_read-committed
This states the same as you said, except that according to that same reference there is a condition upon which a lock will be released:
Also, record locks for nonmatching rows are released after MySQL has evaluated the WHERE condition.
This is reiterated at another manual page, http://dev.mysql.com/doc/refman/5.1/en/innodb-record-level-locks.html
There are also other effects of using the READ COMMITTED isolation level or enabling innodb_locks_unsafe_for_binlog: Record locks for nonmatching rows are released after MySQL has evaluated the WHERE condition.
So we're told that the WHERE condition must be evaluated before the lock can be released. Unfortunately we're not told when the WHERE condition is evaluated, and that would probably be subject to change from one plan to another created by the optimizer. But it does tell us that lock release depends somehow on the performance of query execution, optimization of which, as discussed above, depends on careful writing of the statement and judicious use of indexes. It could also be improved by better table design, but that would best be left to a separate question.
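If you want to rely on that early release of nonmatching record locks, the session must actually run at READ COMMITTED. A minimal sketch, reusing the multi-table DELETE from above:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
DELETE pw
FROM proc_warnings pw
INNER JOIN day_position dp ON dp.transaction_id = pw.transaction_id
INNER JOIN ivehicle_days vd ON vd.id = dp.ivehicle_day_id
WHERE vd.ivehicle_id = 2 AND dp.dirty_data = 1;
COMMIT;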
Moreover, the index is not locked either when the proc_warnings table is empty
The database can't lock records within the index if there are none.
Moreover, the index is not locked when... the day_position table contains a smaller number of rows (i.e. one hundred rows).
This could mean any of several things, including but probably not limited to: a different execution plan due to a change in statistics, or a lock held too briefly to be observed because the much smaller data set/join makes execution much faster.
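One way to observe which record locks are actually held while the statement blocks (a diagnostic sketch, not part of the original discussion; the INFORMATION_SCHEMA lock tables require the InnoDB plugin or MySQL 5.5 and later):
SHOW ENGINE INNODB STATUS\G                          -- lock waits appear in the TRANSACTIONS section
SELECT * FROM information_schema.INNODB_LOCKS;      -- locks currently requested or held
SELECT * FROM information_schema.INNODB_LOCK_WAITS; -- which transaction is waiting on which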
How do you define the "tablespace size"? Are you interested in the total size of the data files on disk that comprise the tablespace? Or are you interested in the total size of all the segments that are part of the tablespace?
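The two definitions can be measured directly; a sketch, where 'USERS' is a placeholder tablespace name:
-- Total size of the data files on disk that comprise the tablespace
SELECT SUM(bytes)/1024/1024 AS file_mb
FROM dba_data_files
WHERE tablespace_name = 'USERS';

-- Total size of all the segments stored in the tablespace
SELECT SUM(bytes)/1024/1024 AS segment_mb
FROM dba_segments
WHERE tablespace_name = 'USERS';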
Issuing a DELETE will not affect the size of the table's segment, so it will have no impact on the size of the tablespace under either definition. Both the size of the table's segment and the size of the tablespace's data files will remain constant. Of course, there will now be additional free space in many of the table's blocks that can be used by subsequent INSERT and UPDATE operations.
Issuing a TRUNCATE, on the other hand, will decrease the size of the table's segment. That won't affect the size of the tablespace's data files, but it will affect the total size of all the segments that are part of the tablespace. So there may be a difference depending on your definition of the size of a tablespace. A TRUNCATE, being DDL, is not transactional, so it cannot be rolled back. Assuming that you are deleting a large fraction of the rows in the table, it will also tend to be much more efficient than issuing a DELETE because it generates much less UNDO and REDO.
If you are saying in your last paragraph that the size of the segment is increasing much faster than the rate at which new data is being added (assuming the new rows are roughly the same size as the old rows and the old rows are not growing over time due to updates), is it possible that the new rows are being added via direct-path inserts? Direct-path inserts always go above the current high-water mark of the segment and thus never reuse the space in blocks freed up by a DELETE. If so, is that intentional? If the table is small, you might see similar differences because of the granularity of extent allocation: you might insert 100 rows without requiring a new extent, the 101st insert requires Oracle to allocate a new extent, and that new extent might be sufficient for thousands of new rows, yet you only see the size of the segment change at the 101st insert. But that is less likely if this is a reasonably large table, unless you've chosen a particularly large extent size.
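For reference, a direct-path insert is typically produced with the APPEND hint; a sketch with placeholder table names:
INSERT /*+ APPEND */ INTO transactions
SELECT * FROM staging_transactions;
COMMIT;  -- after a direct-path insert the session cannot query the table
         -- again until it commits (ORA-12838)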
Best Answer
First of all, do not use database links for larger transactions; this can cause too many problems with blocking sessions. Another hint is to keep transactions at a good size (not too small, not too big).
If your table is not partitioned, then write a piece of code that removes just 1000 rows, commits, and deletes the next 1000 rows, and so on.
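A minimal PL/SQL sketch of that batched pattern, assuming a table named transactions with a DATE column trx_date (both names are placeholders; tune the batch size to your undo capacity and lock tolerance):
BEGIN
  LOOP
    DELETE FROM transactions
     WHERE trx_date < ADD_MONTHS(TRUNC(SYSDATE), -12)  -- rows older than one year
       AND ROWNUM <= 1000;                             -- cap each batch at 1000 rows
    EXIT WHEN SQL%ROWCOUNT = 0;  -- stop once nothing is left to delete
    COMMIT;                      -- release locks and undo after every batch
  END LOOP;
  COMMIT;
END;
/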
You said something about
alter table <table_name> truncate partition <partition_name>;
Is this table partitioned? To archive an entire partition, it is best to exchange it with an empty table of the same structure. Afterwards you can export/backup that table and then drop it. If your partition is in a dedicated tablespace, you can also mark it read only and skip it in the daily backup (enable backup optimization in RMAN).
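A sketch of that exchange approach, assuming a partitioned table transactions with a partition p_2022 and an archive table transactions_arch (all names are placeholders):
-- Empty table with the same column structure as the partitioned table
CREATE TABLE transactions_arch AS
SELECT * FROM transactions WHERE 1 = 0;

-- Swap the partition's segment with the empty table: a fast data
-- dictionary operation with no row movement
ALTER TABLE transactions
  EXCHANGE PARTITION p_2022 WITH TABLE transactions_arch;

-- transactions_arch now holds the old rows; export or back it up
-- (e.g. with Data Pump) and drop it, then drop the empty partition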