We have a huge production database, around 300 GB in size. Is there any approach to improve the performance of a delete query? Right now the deletion speed is between 1,000 and 10,000 rows per minute, which is very slow for us.
Sql-server – Improve delete speed for SQL Server
Tags: performance, query-performance, sql-server
Related Solutions
`DETAILED` implies a full scan of every single page in the index (or heap). Do this for every table and every secondary index, and the result is a full database scan, end to end, and not a very efficient one (i.e. not nearly as fast as a backup would read it, for instance). The time is driven by:

- how big your database is
- how fast your IO subsystem is at reading the entire database
- additional concurrent load competing for the IO throughput

Basically, if all you have is a straw (your IO throughput), it takes 30 minutes to drink a bucket (your database size). Buy faster IO, reduce the size of your data, or use `SAMPLED` scans.
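Assuming the mode in question is the mode parameter of `sys.dm_db_index_physical_stats` (an inference from the context; the database name below is a placeholder), switching from `DETAILED` to `SAMPLED` looks like this:

```sql
-- SAMPLED reads only a sample of pages rather than every page,
-- which is much lighter on the IO subsystem than DETAILED.
SELECT object_id,
       index_id,
       avg_fragmentation_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'MyDatabase'),  -- placeholder database name
         NULL,                  -- all objects
         NULL,                  -- all indexes
         NULL,                  -- all partitions
         'SAMPLED');            -- instead of 'DETAILED'
```

`LIMITED` is cheaper still, scanning only the upper index levels, if fragmentation figures are all you need.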
That being said... 20 GB is quite small. 30 minutes to read 20 GB is a lot of time. Is your IO subsystem really that slow? Did you deploy on 7200 RPM consumer 1 TB drives?
As Steve said in the comments, make sure instant file initialization is turned on. And Sting's answer discussed multiple files and compression, but unfortunately those apply on the BACKUP side.

You can, however, also try modifying the restore command with the `BLOCKSIZE`, `BUFFERCOUNT` and `MAXTRANSFERSIZE` options of the RESTORE command.
Unfortunately the BOL entry for RESTORE doesn't say much about them, so you have to look at the BOL entry for the BACKUP DATABASE command instead:
- BLOCKSIZE: Specifies the physical block size, in bytes. The supported sizes are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536 (64 KB) bytes. The default is 65536 for tape devices and 512 otherwise.
- BUFFERCOUNT: Specifies the total number of I/O buffers to be used for the backup operation. You can specify any positive integer; however, large numbers of buffers might cause "out of memory" errors because of inadequate virtual address space in the Sqlservr.exe process. The total space used by the buffers is determined by: buffercount * maxtransfersize.
- MAXTRANSFERSIZE: Specifies the largest unit of transfer in bytes to be used between SQL Server and the backup media. The possible values are multiples of 65536 bytes (64 KB) ranging up to 4194304 bytes (4 MB).
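Putting the three options together, a RESTORE might look like the sketch below. The path and numbers are placeholders, not recommendations; the right values depend on your storage and memory, as discussed next.

```sql
-- Illustrative sketch only: the file path and option values are
-- placeholders to be tuned for your own storage and memory.
RESTORE DATABASE MyDatabase
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH BLOCKSIZE = 65536,         -- 64 KB physical block size
     BUFFERCOUNT = 50,          -- 50 I/O buffers
     MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer
     STATS = 10;                -- report progress every 10%
-- Memory consumed by the buffers is roughly
-- BUFFERCOUNT * MAXTRANSFERSIZE (50 * 4 MB = 200 MB here).
```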
This post had a nice analogy and warning for MAXTRANSFERSIZE and BUFFERCOUNT:

This can be a tricky option to use: specifying too many buffers can lead to "Out of memory" errors! Always use this option with care! The total memory used by the restore process is MaxTransferSize x BufferCount = memory needed by the restore; if you do not have the needed amount of memory on your server, you will get errors!
You can think of the SQL Server restore process as using buckets to put out a fire.
The BufferCount sets the number of buckets to use to put out the fire, while the MaxTransferSize sets how full these buckets should be. As you can imagine, using many buckets and only filling them halfway can impact the time it takes you to put out the fire, just like using only a few buckets but filling them all the way to the top can have a negative impact if the buckets are too heavy to lift. Setting the MaxTransferSize and BufferCount options can be different for every environment, since you need to optimize the values to match your storage and memory configuration. So play around with different values until you get your optimized restore time!
As he said at the end, the optimal values will differ per server, so you will have to experiment with them somewhat to get the optimal restore speed.
And here is a great question that goes over using them in a fair amount of detail. It is talking about BACKUPs, but again, the options are the same.
Edit: I've been looking for this all day. Nic Cain has a script that will try a backup over and over again with variations on the settings to help you find the best results. Here is the link. You should easily be able to modify this script to test RESTOREs instead.
Best Answer
If you are trying to delete a large number of rows in a single statement, then it is likely you are waiting on log activity. So you can:

- TRUNCATE or DROP/CREATE the table if you don't need to keep any of its rows.
- SELECT INTO to put the data you want to keep into another table, then TRUNCATE, then move the small portion back. (Or just drop the old table, rename the new one, and re-apply constraints / permissions etc.)
- Switch to the SIMPLE recovery model, so that a CHECKPOINT clears the log instead of taking log backups; but you need to be sure to set it back afterwards and to take a new full backup to re-initiate the log chain.
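The SELECT INTO / TRUNCATE option above can be sketched as follows; `dbo.BigTable` and the `KeepFlag = 1` predicate are placeholders for your own table and "rows to keep" condition:

```sql
-- 1. Copy the small set of rows you want to KEEP into a work table.
SELECT *
INTO dbo.BigTable_Keep
FROM dbo.BigTable
WHERE KeepFlag = 1;

-- 2. Empty the original table; TRUNCATE is minimally logged,
--    unlike a large DELETE.
TRUNCATE TABLE dbo.BigTable;

-- 3. Move the kept rows back and drop the work table.
INSERT INTO dbo.BigTable
SELECT * FROM dbo.BigTable_Keep;

DROP TABLE dbo.BigTable_Keep;
```

This works best when the rows to keep are a small fraction of the table; you pay to copy the keepers twice, but avoid fully logging the deletion of everything else.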