SQL Server – Performance Degradation Over Time Until Restart

Tags: memory, performance, sql-server, sql-server-2008-r2

We have a database with a mixed OLAP/OLTP workload. The queries are quite ad hoc and are generated dynamically by the mid-tier application server. When we start the server, performance is quite acceptable, but memory consumption grows steadily until all available memory (30 GB) is exhausted. After that, the system becomes slower and slower.

Commands like DBCC FREEPROCCACHE have no effect.
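For reference, a sketch of that family of cache-clearing commands (only FREEPROCCACHE is named above; FREESYSTEMCACHE is an assumed example of the same kind):

    DBCC FREEPROCCACHE;            -- drop all cached query plans
    DBCC FREESYSTEMCACHE ('ALL');  -- clear all entries from all system caches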

There are not many transactions in SELECT * FROM sys.dm_tran_session_transactions (no more than when the system is fine); sometimes the list is empty.
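To see which sessions (if any) own those transactions, the DMV can be joined against the sessions view with a query along these lines (a sketch, not part of the original monitoring setup):

    -- Sessions that currently have an open transaction, and who owns them
    SELECT s.session_id,
           s.login_name,
           s.program_name,
           t.transaction_id,
           t.is_user_transaction
    FROM sys.dm_tran_session_transactions AS t
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = t.session_id;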

The first result set of DBCC MEMORYSTATUS (values in KB) is:

VM Reserved               42136628
VM Committed               1487176
Locked Pages Allocated    24994048
Reserved Memory               1024
Reserved Memory In Use           0
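To break down where that committed and locked memory actually sits, the memory clerks DMV can be aggregated as in this sketch (single_pages_kb and multi_pages_kb are the SQL Server 2008 R2 column names; later versions consolidate them into pages_kb):

    -- Top 10 memory consumers by clerk type
    SELECT TOP (10)
           type,
           SUM(single_pages_kb + multi_pages_kb + awe_allocated_kb) AS total_kb
    FROM sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY total_kb DESC;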

A restart of SQL Server solves the problem for a while.

  1. What causes this behavior? How can it be avoided?
  2. If a real fix for the root cause is too difficult, is there a command that forces SQL Server to actually release all memory, without a complete DBMS restart?

The server runs on dedicated hardware (not a VM). We had some scheduled jobs, but disabling them for a while made no difference. Other mid-tier applications run on the same server, but they use no more than 2 GB of memory, negligible CPU, and almost no I/O. We restarted all of those applications, with no change.

Best Answer

I would suggest collecting performance metrics on this server, so you can eliminate the guesswork from troubleshooting these types of problems. See this article for a more complete guide if you don't know where to start with this.

In particular, I would check the performance counters Memory\Available MBytes and Paging File(_Total)\% Usage because you said the issues only start occurring when the buffer pool is full. The numbers you get back from these counters may indicate that the max server memory setting needs to be adjusted (either up or down) for the amount of physical memory allocated to the server. As I mentioned here, I don't recommend basing the max memory setting on the amount of physical memory except as an educated guess for a starting point. Always measure the result, and adjust from there.
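As a quick point-in-time check from inside the instance (not a substitute for collecting the perfmon counters over time), sys.dm_os_sys_memory reports what Windows tells SQL Server about physical memory and the page file; a sketch:

    SELECT total_physical_memory_kb / 1024      AS total_physical_mb,
           available_physical_memory_kb / 1024  AS available_physical_mb,
           total_page_file_kb / 1024            AS total_page_file_mb,
           available_page_file_kb / 1024        AS available_page_file_mb,
           system_memory_state_desc             -- e.g. 'Available physical memory is low'
    FROM sys.dm_os_sys_memory;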

If the amount of free memory is too low (< 500 MB), or the page file usage is above zero, this may indicate that the SQL Server instance is overcommitted: on SQL Server 2008 R2, the max server memory setting controls only the buffer pool size, not other memory usage such as the plan cache. SQL Server also does not account for other applications you may have running on the system. This extra memory usage can put memory pressure on Windows -- or on the other applications -- possibly leading to disk swapping, which you want to avoid at all costs, particularly if the page file sits on a volume backed by a simple RAID 1 mirror. My feeling is that this is the issue, and backing off the max server memory setting should fix the problem.
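For illustration, backing off max server memory looks like this; the 26624 MB figure is just an assumed example for a 30 GB box, not a recommendation -- measure the result and adjust from there:

    -- Lower max server memory to leave headroom for Windows and the other apps
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 26624;  -- 26 GB: illustrative value only
    RECONFIGURE;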

If the amount of free memory is high (> 1000 MB) and the page file usage is zero, you can probably bump up max server memory slightly (in 256 MB increments) to make fuller use of the memory in the server. This most likely won't solve the problem, however, and you'll need to look elsewhere, probably at the physical disk counters and the buffer pool's page life expectancy. If queries are thrashing the buffer pool, there's nothing you can do except improve disk performance, increase the amount of physical memory so all the data pages fit in memory at once, or shrink the database's physical footprint (perhaps with row or page compression, or by rebuilding indexes with a higher FILLFACTOR).
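Page life expectancy can be read straight from the instance; a sketch (the object name matched here is for a default instance -- named instances report it under MSSQL$<instance>:Buffer Manager):

    -- Buffer pool page life expectancy, in seconds
    SELECT object_name, counter_name, cntr_value AS page_life_expectancy_sec
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND object_name LIKE '%Buffer Manager%';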

I've published an article here that goes into more depth about this issue and how to solve it.