SQL Server – Is query performance different for different versions of SQL Server?


I have three update queries in my stored procedure, one for each of three different tables. Each table contains almost 200,000 records and all of them have to be updated. I am using indexing to speed up performance. This works well on SQL Server 2008: the stored procedure takes only 12 to 15 minutes to execute (it updates almost 1,000 rows per second across all three tables).

But when I run the same scenario on SQL Server 2008 R2, the stored procedure takes much longer to finish: about 55 to 60 minutes (it updates almost 100 rows per second across all three tables). I couldn't find any reason or solution for that.

I have also tested the same scenario on SQL Server 2012, but the result is the same as above.

Here are the three update statements in my stored procedure:

IF (SELECT COUNT(*) FROM table WHERE conditions) > 0
BEGIN

    UPDATE table1
    SET ColumnName = @ColumnName
    WHERE conditions;

    UPDATE table2
    SET ColumnName = @ColumnName
    WHERE conditions;

    UPDATE table3
    SET ColumnName = @ColumnName
    WHERE conditions;

END

[Execution plan screenshots: Image 1 and Image 2]

Best Answer

Yes, it will be different. Every version will have tweaks to the optimiser, the code paths, the caching algorithms, or something else. I would expect the later versions to run faster than the earlier ones.

"I am using indexing to speed up performance."

No, you're not. If every row in the table has to be updated then an index is, at best, irrelevant. If an index contains a column which you are updating then that index's pages also have to be updated, incurring additional write overhead.
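If it turns out a nonclustered index on the updated column is what's costing you, one option is to disable that index for the duration of the bulk update and rebuild it afterwards, provided nothing else needs it while the update runs. This is only a sketch with made-up names (table1, ColumnName, IX_table1_ColumnName):

    -- Hypothetical index name; disabling it means the UPDATE only writes to the base table
    ALTER INDEX IX_table1_ColumnName ON table1 DISABLE;

    UPDATE table1
    SET ColumnName = @ColumnName
    WHERE conditions;

    -- The rebuild brings the index back up to date in a single pass
    ALTER INDEX IX_table1_ColumnName ON table1 REBUILD;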

Have you checked the query plan on the three different instances? My guess is that the 2008 R2 instance has less memory available and the optimiser is choosing to spill to tempdb, or the operating system is forcing paging to the virtual memory pagefile. Or it may be that the transaction log files for the 2008 R2 instance started out small and what you're seeing is an artifact of autogrowth to accommodate 600,000-odd log records.
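If you want to rule the log files in or out, comparing their size and autogrowth settings between the two instances is quick. A minimal check, run in the database concerned (sys.database_files reports sizes in 8 KB pages):

    -- Data and log file sizes plus autogrowth settings for the current database
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.database_files;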

When processing this many records I would always recommend batching to my devs. Update the first N records in cluster key sequence, then the next N, and so forth until all 600,000 are done. N can be tuned to give the best rows/minute on your particular installation, though tens of thousands tends to be about right. This way the log files remain smaller, transactional locking is reduced, and the process is restartable in the case of a failure. A sketch of the pattern follows.
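A minimal sketch of that batching loop, assuming table1 has an ever-increasing integer clustered key called Id (both names are placeholders, like the names in the question's own code); the same pattern would be repeated for table2 and table3:

    DECLARE @BatchSize int = 20000;   -- "N": tune for the best rows/minute on your installation
    DECLARE @LastId    int = 0;
    DECLARE @MaxId     int;

    SELECT @MaxId = MAX(Id) FROM table1;

    -- Walk the clustered key in ranges so each UPDATE is a small, short transaction
    WHILE @LastId < @MaxId
    BEGIN
        UPDATE table1
        SET ColumnName = @ColumnName
        WHERE Id > @LastId
          AND Id <= @LastId + @BatchSize;   -- add the original WHERE conditions here

        SET @LastId = @LastId + @BatchSize;
    END;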