SQL Server – Query performance degrades with time and use

Tags: index, performance, sql-server-2005

I have been fighting a problem for several weeks now where the performance of my SQL Server queries degrades over a few days of use. In addition, every few days a query will simply not return from my application's ODBC SQLExecute() call.

Rebuilding the indexes manually (in SQL Server Management Studio) "fixes" both problems.

Here is some more information:

  • DB is ~13 GB in size
  • Using ODBC version 3.5.1 in the app
  • The apps are C++ with an ODBC interface to the SQL DB
  • The test case runs against a static DB – no inserts, so no fragmentation should be possible!
  • Small queries initially take less than 1 second and degrade to over 50 seconds
  • I have seen the problem on all three machines it has run on

I'm having a hard time understanding why the same query works fine for a while but then slows way down.

Any ideas would be greatly appreciated.

Best Answer

Queries can start to slow down over time for a few reasons, and rebuilding the indexes may be fixing the problem in a few different ways. I'll share some of the more common reasons in my experience, though there could be other causes as well. My guess is that you are suffering from one of these issues. I've also asked some questions in a comment on your question to see if we can get more details. A few thoughts:

Statistics Getting Stale - SQL Server maintains column and index statistics. These essentially tell the Query Optimizer how your data is distributed. This information is critical to the optimizer when choosing the right access method for the data (seek vs. scan) and then choosing the join method to use. If you have auto update statistics enabled (the default setting in SQL Server, set at the database level), these get recomputed, but only when "enough" data changes. So if you have some inserts into your table but never manually update statistics, and the inserts/updates are not enough to trigger an auto stats update, you could be suffering from poor plans for your data distribution. Rebuilding your indexes also recomputes your index statistics. I would create a job to manually update statistics on a regular basis (this is a best practice anyway), and the next time this happens, try just running sp_updatestats in your database and see if you notice a difference.
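If you want to see how stale things are before acting, something along these lines should work on SQL Server 2005; dbo.YourTable is just a placeholder for one of your own tables:

    -- When was each statistic in the current database last updated?
    SELECT  OBJECT_NAME(s.object_id)            AS table_name,
            s.name                              AS stats_name,
            STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM    sys.stats AS s
    WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    ORDER BY last_updated;

    -- Refresh statistics for the whole database...
    EXEC sp_updatestats;

    -- ...or for a single table with a full scan (placeholder table name)
    UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;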

Query plan issues - You could be suffering from parameter sniffing: basically, the first time a query runs, one value is passed in and the query gets optimized for that value. When you next run it with a different value that would benefit from a different query plan, it is stuck with the original plan, resulting in a slow query. When things run slow for the app, are they also slow if you run the same query in SQL Server Management Studio? If it is fast in SSMS but slow in the app, that can be a good sign pointing towards parameter sniffing. If it is consistently slow across the board over time, for all queries and regardless of parameters, then I wouldn't look here. This article talks quite a bit about parameter sniffing.
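As a rough illustration (the procedure, table, and parameter names here are made up, not from your schema), one way to rule sniffing in or out on SQL Server 2005 is to force a fresh plan per execution and see whether the slowdowns disappear:

    -- Hypothetical procedure; dbo.Orders, CustomerID, etc. are placeholders.
    CREATE PROCEDURE dbo.usp_GetOrders
        @CustomerID int
    AS
    BEGIN
        -- OPTION (RECOMPILE) builds a fresh plan on every call, so a plan
        -- compiled for one atypical @CustomerID is never reused for others.
        SELECT OrderID, OrderDate
        FROM   dbo.Orders
        WHERE  CustomerID = @CustomerID
        OPTION (RECOMPILE);
    END;

    -- Or mark an existing procedure so its cached plan is discarded
    -- and rebuilt the next time it runs.
    EXEC sp_recompile 'dbo.usp_GetOrders';

Keep in mind the recompile cost is paid on every call, so this is more a diagnostic step than a permanent fix on a busy system.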

Not enough memory/too many ad hoc plans - It sounds like you are sending ad hoc SQL to SQL Server. This can bloat your plan cache, especially if you end up with a separate plan for each execution of a query. Depending on the memory on your server, this can also lead to the issue. How much memory is on your server? Check out this link on the problem with single-use plans. You don't have a lot of great solutions in SQL Server 2005 for this problem, if you have it. If you can recreate the problem in a non-prod environment, I would suggest running DBCC FREEPROCCACHE there the next time it happens. Please note: this command is instance-wide, so if you run it on production, any stored query plans in cache for any database will no longer be there, and you have to "pay" for compilations again. If you have high concurrency and a busy system, this could cause issues. If this is the only real database and you are suffering from performance issues anyway, it doesn't hurt to try it in production. If you have other databases and just want to do it for this one database, this blog post explains how to clear the cache for just one DB.
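To get a feel for how much of the cache is being taken up by single-use ad hoc plans, a query along these lines against the plan-cache DMV (available in SQL Server 2005) gives a rough picture:

    -- How many cached plans have only ever been used once, and how much memory they hold
    SELECT  cp.objtype,
            COUNT(*)                                            AS single_use_plans,
            SUM(CAST(cp.size_in_bytes AS bigint)) / 1024 / 1024 AS cache_mb
    FROM    sys.dm_exec_cached_plans AS cp
    WHERE   cp.usecounts = 1
    GROUP BY cp.objtype
    ORDER BY cache_mb DESC;

    -- Flushes the plan cache for the whole instance - test in non-prod first.
    DBCC FREEPROCCACHE;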

Index Fragmentation - It is possible that index fragmentation is the actual issue here, but I'm surprised it gets so bad so quickly. If your tables are clustered on a key that fragments quickly and you have a lot of inserts, this could be the case. It would be made much worse if you were underpowered in terms of memory and disk IO. Setting up a job to rebuild/reorganize your indexes on a regular basis would be good. Based on your answers to some of the questions in the comments above, there may be other things to do to minimize the impact of this.
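To check whether fragmentation really is building up that fast, you can sample it with the physical-stats DMV and then reorganize or rebuild as needed; the table name below is a placeholder, and the 5%/30% thresholds are just the commonly quoted rule of thumb:

    -- Fragmentation overview for the current database (LIMITED keeps the scan cheap)
    SELECT  OBJECT_NAME(ps.object_id)        AS table_name,
            i.name                           AS index_name,
            ps.avg_fragmentation_in_percent,
            ps.page_count
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN    sys.indexes AS i
            ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE   ps.page_count > 100              -- ignore tiny indexes
    ORDER BY ps.avg_fragmentation_in_percent DESC;

    -- Rule of thumb: REORGANIZE between ~5% and ~30% fragmentation, REBUILD above that.
    ALTER INDEX ALL ON dbo.YourTable REORGANIZE;
    ALTER INDEX ALL ON dbo.YourTable REBUILD;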