SQL Server – Load Testing When SQL Server Caching Is Disabled


I was doing a load test by sending the same web request 5000 times one after the other. Here are the total data transfer amounts I observed:

From SQL Server to SAN: 6 MB
From SAN back to SQL Server: 1.5 MB
From SQL Server back to IIS Server: 71 MB

And here are the mdf and ldf file size changes, with the database in the full recovery model:

mdf size increase: 1 MB
ldf size increase: 4 MB

As you can see, despite the 71 MB returned to the IIS server, the total data transferred from the SAN to SQL Server was only 1.5 MB. After some research, I figured this was because SQL Server was caching the data in memory and not hitting the disk for each request.
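For illustration only (this is not one of the measurements above, and the table name and key value below are just placeholders), the caching is easy to see with STATISTICS IO: after the first execution, repeated runs of the same query report physical reads 0, meaning every page is served from the buffer pool rather than the SAN.

    -- Illustration only; "questions" and the id value are placeholders.
    SET STATISTICS IO ON;

    SELECT up_votes FROM questions WHERE id = 42;  -- cold cache: physical reads > 0
    SELECT up_votes FROM questions WHERE id = 42;  -- repeat run: physical reads 0, logical reads only

    SET STATISTICS IO OFF;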

So, in order to disable this behavior, I ran CHECKPOINT; DBCC DROPCLEANBUFFERS; after each request.
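The batch executed between requests was simply the following (a sketch; the loop driving the 5000 web requests runs outside SQL Server and is not shown):

    -- Flush all dirty pages to disk, then empty the clean buffer cache,
    -- so the next request has to read every page from the SAN again.
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;

Here are the results after this change: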

From SQL Server to SAN: 390 MB
From SAN back to SQL Server: 11 GB
From SQL Server back to IIS Server: 71 MB

mdf size increase: 1 MB
ldf size increase: 30 MB

My questions are:

1. Why did the data transfer from SQL Server to the SAN increase from 6 MB to 390 MB? Shouldn't disabling caching have affected only reading from the disk, not writing to it?

2. The data from the SAN to SQL Server became 11 GB. Is this a better indicator of the production environment than the 1.5 MB?

3. Again on the 11 GB from the SAN to SQL Server: why does SQL Server need to fetch that much data from disk when it only returns 71 MB to the IIS server? Since I am using appropriate indexes, I assume SQL Server shouldn't need to read that many pages.

4. The ldf size increase in the cache-disabled case is almost 8 times the increase in the cached case. Why is this?

Best Answer

Your tests are flawed. The data sizes are too small to produce any meaningful results, and running CHECKPOINT/DBCC DROPCLEANBUFFERS after every query gives results that would never correlate with reality.

As for your questions:

  1. Assume a simple query run in a loop:

    UPDATE questions SET up_votes = up_votes + 1 WHERE id = @id

This query generates a small transaction log record each time it runs and dirties the page it touches. If you update the same row 1000 times, you produce 1000 small log records but only one 8 KB data-page write (at some point later). If you issue a CHECKPOINT after every query, you force that dirty page to disk every time (at least one additional 8 KB write, possibly more depending on indexes). DBCC DROPCLEANBUFFERS then forces the next query to issue multiple 8 KB reads: the data page, the intermediate index levels, GAM pages, SGAM pages, PFS pages, and so on. The sketch after this list shows one way to watch those reads and writes per file.

  2. No. Even if you have a 500 GB database and only 8 GB of RAM, a lot of metadata and intermediate-level index pages will remain in memory, and keeping them there is critical for performance. (The second query in the sketch below shows which page types actually stay cached.)

  3. See above about GAM/SGAM, etc.

  4. CHECKPOINTs generate a trivial amount of additional log activity. With data volumes this small, you are still squarely in the "background noise" range, so the log growth cannot be evaluated in any meaningful fashion.
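To put rough numbers on the points above, here is a minimal sketch (mine, not from the original answer) using two standard DMVs: sys.dm_io_virtual_file_stats reports physical reads and writes per database file, so the data file and the log file can be examined separately, and sys.dm_os_buffer_descriptors shows which page types are actually sitting in the buffer pool. It assumes it is run in the context of the test database; snapshot the first query before and after a test run and compare the deltas.

    -- Physical I/O per file for the current database since instance startup.
    -- type_desc separates the data file (ROWS) from the log file (LOG).
    SELECT  mf.name                  AS file_name,
            mf.type_desc             AS file_type,
            vfs.num_of_reads,
            vfs.num_of_bytes_read,
            vfs.num_of_writes,
            vfs.num_of_bytes_written
    FROM    sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
    JOIN    sys.master_files AS mf
            ON  mf.database_id = vfs.database_id
            AND mf.file_id     = vfs.file_id;

    -- What the buffer pool holds for this database, grouped by page type
    -- (DATA_PAGE, INDEX_PAGE, GAM_PAGE, SGAM_PAGE, PFS_PAGE, IAM_PAGE, ...).
    -- Each page is 8 KB, so COUNT(*) * 8 gives the cached size in KB.
    SELECT  page_type,
            COUNT(*) * 8 AS cached_kb
    FROM    sys.dm_os_buffer_descriptors
    WHERE   database_id = DB_ID()
    GROUP BY page_type
    ORDER BY cached_kb DESC;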

To return to the bigger point: evaluating anything from these data volumes is like trying to measure a vehicle's gas mileage by backing out of the driveway.

If you are concerned with SAN performance, look into Microsoft's DiskSpd.exe tool (the replacement for the older SQLIO). Its documentation includes sample configurations that correspond to known SQL Server access patterns.