We have a large application running that requires cleanup for performance reasons. This was, however, not foreseen when the application was designed.
Basically, the delete is executed from a stored procedure, which first does a couple of selects to determine the data to be deleted, after which it starts deleting from different tables. As the link between these rows is essential for the deletion, we must avoid deleting, for example, an order without also deleting its dependencies. Therefore it has to run in one transaction.
The problem is that whenever the script is running, the application itself becomes unusable: timeouts when getting data from the web or when trying to update a certain record. All those queries are blocked by the session that runs the transaction.
The data being deleted is not relevant anymore, and should thus not be updated by the application.
I've tried running the transaction under different isolation levels, including snapshot, but it still doesn't help.
How can I avoid these locks? Should I use READ_COMMITTED_SNAPSHOT?
Thanks in advance…
Best Answer
I don't think you have to force all of your deletes to occur within a single, monolithic transaction. Instead of having a transaction that does:
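
```sql
-- A sketch of the monolithic approach (table names here are hypothetical;
-- your procedure presumably deletes from its own set of related tables):
BEGIN TRANSACTION;

DELETE dbo.OrderDetails WHERE OrderID IN (SELECT OrderID FROM #OrdersToPurge);
DELETE dbo.Orders       WHERE OrderID IN (SELECT OrderID FROM #OrdersToPurge);
-- ... more dependent tables ...

COMMIT TRANSACTION;
-- Every lock taken by every DELETE is held until the final COMMIT,
-- which is what blocks the application for the whole duration.
```
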
Why not delete in chunks? You can play around with the `TOP (?)` parameter based on how many rows lead to what kind of transaction duration (there is no magic formula for this, even if we had a lot more information about your schema).

This may extend the total amount of time the operation takes (especially if you back up or checkpoint on each loop, or add an artificial delay using `WAITFOR`, or both), but it should allow other transactions to sneak in between chunks, waiting on shorter transactions instead of the whole process. I wrote a lot more about this technique here.
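
A minimal sketch of the chunked approach, with hypothetical table and column names (tune the `TOP` value and the `WAITFOR` delay to your workload):

```sql
-- Delete in small batches so each transaction is short-lived and
-- releases its locks quickly (hypothetical table/column names):
DECLARE @rc INT = 1;

WHILE @rc > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (5000) od
      FROM dbo.OrderDetails AS od
      INNER JOIN dbo.OrdersToPurge AS otp
        ON od.OrderID = otp.OrderID;

    SET @rc = @@ROWCOUNT;   -- 0 when nothing is left to delete

    COMMIT TRANSACTION;

    -- Optional: pause so blocked sessions can get their locks in,
    -- and optionally CHECKPOINT / BACKUP LOG here to keep the log small.
    WAITFOR DELAY '00:00:01';
END
```

Each dependent table can get its own loop like this (children first, then parents), so referential links are still respected without holding one giant transaction open.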