I'm working on performance analysis of reports, resulting from different test cases. I'm using MS SQL Server 2016 Dev edition
Regarding one of the performance counters ('Checkpoint Pages/sec'), I understand the meaning of the counter. Roughly every 22 seconds, the graph drops from 400 to 200 dirty pages written per second, and then returns to 400.
Result from the graph:
- Seconds 1-13: value is around 400
- Seconds 14-16: falls to 200, then returns to 400
- Seconds 16-38: value is around 400
- Second 39: falls to 200, then returns to 400
- Second 61: falls to 200, then returns to 400
- Second 84: falls to 200, then returns to 400
The values of the 'Disk Writes/sec' counter track those of 'Checkpoint Pages/sec' almost exactly.
I expected both counters to be flat lines (without the peaks and valleys).
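For reference, the same counter can also be sampled from T-SQL instead of PerfMon. A sketch (note that per-second counters in this DMV are cumulative, so the per-second rate comes from taking two samples and dividing the difference by the elapsed time):

```sql
-- Sketch: read the cumulative 'Checkpoint pages/sec' counter.
-- Sample it twice and diff the values to get an actual per-second rate.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Checkpoint pages/sec'
  AND object_name LIKE '%Buffer Manager%';
```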
Can you explain this behavior? What do the drops in Checkpoint Pages/sec from 400 to 200 mean?
Best Answer
Checkpoints, like most things in SQL Server, are a big topic. A good place to start is this Microsoft Docs page: Database Checkpoints (SQL Server)
To respond to your main question though, the key takeaway from that page is that checkpoint frequency and intensity are not constant. "Checkpoint pages/sec" will vary based on a number of factors: the type of checkpoint being issued (automatic, indirect, manual, or internal), the 'recovery interval' server configuration option, the TARGET_RECOVERY_TIME database setting, and how many dirty pages have accumulated since the last checkpoint.
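To see which of these settings are in effect on a given instance, something like this can help:

```sql
-- Server-wide automatic checkpoint target (0 = default behavior)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'recovery interval (min)';

-- Per-database indirect checkpoint target;
-- 0 means the database uses automatic checkpoints
SELECT name, target_recovery_time_in_seconds
FROM sys.databases;
```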
All that to say: seeing peaks and valleys in the number of pages being flushed to disk by checkpoint operations is completely normal.
If the purpose of the performance testing you mentioned is to smooth out (as much as possible) the number of pages flushed to disk per second, read the documentation page linked above and consider adjusting the checkpoint settings it describes.
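For example, indirect checkpoints (the default for databases created on SQL Server 2016 and later) tend to produce smaller, more continuous background writes than the bursty automatic checkpoints. A sketch, using a hypothetical database name:

```sql
-- Hypothetical database name: switch to indirect checkpoints
-- with a 60-second recovery target, which generally smooths
-- checkpoint I/O compared to automatic checkpoints.
ALTER DATABASE MyTestDb SET TARGET_RECOVERY_TIME = 60 SECONDS;
```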