SQL Server – Why doesn't the time for a simple UPDATE scale linearly with the number of records?

performance, sql-server

I am running some performance tests. I started with a dataset of 900,000 records; the update took 10 seconds, or about 90,000 records per second. Then I increased the number of records to 9,000,000, and the update took 170 seconds, or about 52,941 records updated per second.

Why does the number of records updated per second drop when the total number of records increases?

The data distribution is the same: the larger set is the smaller set copied 10 times. The data and log files did grow, since I copied the data 10 times, but I shrank the log file afterwards. There is no other workload on the database server at the moment.
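For what it's worth, the current log size and how much of it is in use can be checked directly; this is a standard command, shown here only as a quick sanity check:

```sql
-- Reports log file size (MB) and percentage used for every database
DBCC SQLPERF(LOGSPACE);
```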

The stored procedure doing the update is the following:

CREATE PROCEDURE updatePoints
    @ids NVARCHAR(MAX)    -- delimited list of m.id values
AS
    DECLARE @points1 INT;  -- awarded when (x, y) matches
    DECLARE @points2 INT;  -- awarded when (u, v) matches

    SET @points1 = 1;
    SET @points2 = 2;

    UPDATE p
    SET points = (CASE WHEN p.[x] = m.[x] AND p.[y] = m.[y]
                       THEN @points1
                       ELSE 0
                  END +
                  CASE WHEN p.[u] = m.[u] AND p.[v] = m.[v]
                       THEN @points2
                       ELSE 0
                  END)
    FROM p
    JOIN m ON m.id = p.mid
    JOIN platform.Split(@ids) i ON i.Value = m.id
    WHERE m.[status] = 1;
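For reference, the procedure would be invoked with a delimited id list; the exact format `platform.Split` expects (comma-separated is assumed here, and the id values are made up) is not shown in the question:

```sql
-- Hypothetical call; assumes platform.Split parses a comma-separated list
EXEC updatePoints @ids = N'101,102,103';
```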

Best Answer

If you post the execution plans, there may be obvious differences we can highlight. Beyond the plans, it could be:

  • Different distribution of values in the dataset, leading to more frequent page splits
  • Data and/or log file growth
  • A sort or hash operation spilling to tempdb
  • Contention with other workloads
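If you can rerun both tests, a low-overhead way to gather evidence for several of these causes is to enable the statistics options around the call and then check file sizes afterwards; this is a minimal sketch, with the `@ids` value left as a placeholder:

```sql
SET STATISTICS IO ON;    -- per-table logical/physical reads
SET STATISTICS TIME ON;  -- CPU and elapsed time per statement

EXEC updatePoints @ids = N'...';  -- same parameter as the slow run

-- Did the data or log file grow during the test?
-- size is reported in 8 KB pages, converted to MB here
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
```

Comparing the IO/TIME output of the 900,000-row run against the 9,000,000-row run should show whether the extra time is CPU, reads, or writes, and the file-size query shows whether autogrowth occurred mid-test.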

More information (queries, sample data, schema, plans) is needed for a better answer.