SQL Server – How to avoid timeouts during big DML operations

best-practices, optimization, performance, query-performance, sql-server, sql-server-2008

I have a big INSERT script I need to run. It's about 55,000 records and 160 columns. The script is already created and I can't create it again.

The problem is that this runs for about four hours, and during that time the system that uses this database gets really slow and times out a lot.

I wouldn't mind if my INSERT ran slower, but it shouldn't impact other users.

I was thinking of batching, say, 500 rows at a time and using WAITFOR between batches, but I was wondering whether there is a better option for doing this.

Best Answer

Break the operation into chunks of separate transactions.

BEGIN TRANSACTION;
  INSERT ...
  INSERT ...
  -- maybe 1000 or 2500 of these
COMMIT TRANSACTION;

GO
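-- brief pause between chunks so concurrent sessions can get in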
WAITFOR DELAY '00:00:01';
GO

BEGIN TRANSACTION;
  INSERT ...
  INSERT ...
  -- maybe 1000 or 2500 more of these
COMMIT TRANSACTION;

This will prevent all 55,000 inserts from jamming up your log and running as a single, blocking transaction. I talked about this methodology to some degree in this blog post, even though it deals with more tightly controlled DELETE operations.
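For illustration, here is a minimal sketch of that same chunking idea driven by a WHILE loop, as the DELETE post describes; the table name dbo.BigTable and the Processed predicate are hypothetical stand-ins:

DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
  BEGIN TRANSACTION;

  -- delete one small chunk per transaction (hypothetical table and predicate)
  DELETE TOP (2500) FROM dbo.BigTable
    WHERE Processed = 1;

  -- capture the count immediately, before any other statement resets it
  SET @rows = @@ROWCOUNT;

  COMMIT TRANSACTION;

  -- brief pause so other sessions can make progress between chunks
  WAITFOR DELAY '00:00:01';
END

Each pass commits on its own, so locks and log space are released every 2,500 rows instead of being held for the entire operation.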

If you can, have the file(s) regenerated in a format that is easier to load with BULK INSERT or bcp.
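For example, if the data could be re-exported as a flat file, a BULK INSERT along these lines lets SQL Server commit in batches on its own; the file path, delimiters, and table name here are assumptions:

BULK INSERT dbo.TargetTable
FROM 'C:\load\records.csv'    -- hypothetical path to the exported file
WITH (
  FIELDTERMINATOR = ',',      -- column delimiter in the file
  ROWTERMINATOR   = '\n',     -- row delimiter
  BATCHSIZE       = 2500,     -- commit every 2,500 rows instead of one giant transaction
  TABLOCK                     -- enables minimal logging where the recovery model allows
);

BATCHSIZE gives you the same chunked-commit behavior as the manual script above, without having to split the statements yourself.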