Queries can start to slow down over time for a few reasons, and rebuilding the indexes can fix the problem in a few different ways. I'll share some of the more common causes in my experience, though there could be others; my guess is you are suffering from one of these issues. I've also asked some questions in a comment on your question to see if we can get more details. A few thoughts:
Statistics Getting Stale - SQL Server maintains column and index statistics, which tell the Query Optimizer how your data is distributed. This information is critical when the optimizer chooses the access method for the data (seek vs. scan) and then the join method. If you have auto update statistics enabled (the default setting in SQL Server, at the database level), these get recomputed, but only when "enough" data changes. So if you have some inserts into your table but never manually update statistics, and the inserts/updates are not enough to trigger an auto stats update, you could be suffering from plans that are poor for your actual data distribution. Rebuilding your indexes also recomputes your index statistics. I would create a job to manually update statistics on a regular basis (this is a best practice anyway), and the next time this happens, try running sp_updatestats in your database and see if you notice a difference.
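As a sketch of what that job step could run (the table name below is a placeholder, not from your system):

```sql
-- Refresh any out-of-date statistics in the current database
EXEC sp_updatestats;

-- Or target a single table with a full scan for more accurate statistics
-- (dbo.MyTable is a hypothetical table name)
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;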
Query Plan Issues - You could be suffering from parameter sniffing: the first time a query runs, one value is passed in, and the query gets optimized for that value. When you next run it with a different value that would benefit from a different query plan, it is stuck with the original plan and runs slowly. When things run slow for the app, are they also slow if you run the same query in SQL Server Management Studio? If it is fast in SSMS but slow in the app, that is a good sign pointing towards parameter sniffing. If it is consistently slow across the board over time, for all queries and regardless of parameters, then I wouldn't look here. This article talks quite a bit about parameter sniffing.
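If parameter sniffing does turn out to be the culprit, one common workaround available in SQL Server 2005 is to force a fresh compile per execution with the RECOMPILE query hint. This is just an illustrative sketch with placeholder table, column, and parameter names:

```sql
-- Forces the optimizer to compile a plan for the actual
-- parameter value on every execution (trades CPU for plan quality)
SELECT *
FROM dbo.MyTable            -- placeholder table
WHERE SomeColumn = @SomeValue
OPTION (RECOMPILE);
```

This costs a compilation on each run, so it is best reserved for queries where the "wrong plan" penalty clearly outweighs the compile cost.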
Not Enough Memory/Too Many Ad Hoc Plans - It sounds like you are sending ad hoc SQL to SQL Server. This can bloat your plan cache, especially if you end up with a separate plan for each execution of a query. Depending on the memory on your server, this can also lead to the issue. How much memory is on your server? Check out this link on the problem with single-use plans. You don't have a lot of great solutions for this problem in SQL Server 2005, if you do have it. If you can recreate the problem in a non-prod environment, I would suggest running DBCC FREEPROCCACHE in that non-prod environment the next time it happens. Please note! This is an instance-wide command: if you run it on production, any stored query plans in cache for any database will no longer be there, which means you have to "pay" for compilations again. If you have high concurrency and a busy system, this could cause issues. If this is the only real database and you are suffering from performance issues anyway, it doesn't hurt to try it in production. If you have other databases and just want to clear the cache for this one database, this blog post explains how to approach that.
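To gauge how much of your cache is single-use ad hoc plans before clearing anything, you can query the plan cache DMVs (available in SQL Server 2005). A sketch:

```sql
-- Count and total size of ad hoc plans that have only ever run once
SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS total_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype = 'Adhoc';
```

If that total is a large share of your cache, single-use plan bloat is a likely contributor.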
Index Fragmentation - It is possible that index fragmentation is the actual issue here, but I'm surprised it gets so bad so quickly. If your tables are clustered on a key that fragments quickly and you have a lot of inserts, this could be the case. It would be made much worse if you were underpowered in terms of memory and disk IO. Setting up a job to rebuild/reorganize your indexes on a regular basis would be good. Based on your answers to some of the questions in the comments above, there may be other things you can do to minimize the impact of this.
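You can check how fragmented your indexes actually are with sys.dm_db_index_physical_stats (available in SQL Server 2005); a sketch, with hypothetical object names in the maintenance statements:

```sql
-- List indexes in the current database with noticeable fragmentation
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 10;

-- Common rule of thumb: REORGANIZE for roughly 10-30% fragmentation,
-- REBUILD above that (IX_MyIndex / dbo.MyTable are placeholders)
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
```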
Based on your comments, here is a version that may be faster:
SELECT TABLE1.*, TABLE2.COL1, TABLE2.COL4
FROM TABLE1
INNER JOIN TABLE2 ON TABLE1.COL4 = TABLE2.COL2
WHERE TABLE1.COL1 IN ('1', 'TEXT', 'HO TED')
  AND TABLE1.COL2 = 1800
  AND TABLE1.COL3 >= 100
  AND TABLE2.COL1 = 10519198
  AND TABLE2.COL3 = 'TEXT'
You also need to create a tree (B-tree) index on TABLE1(COL2, COL1, COL3). Beware: a hash-based index won't do here, because TABLE1.COL3 >= 100 is a range condition, and range queries cannot be sped up by hash-based indexes. This is also why COL3 must be the last column of the TABLE1(COL2, COL1, COL3) index.
Another index that could improve things would be a hash-based index on TABLE2(COL1, COL3).
Further indexes that will probably help: TABLE1(COL4) and TABLE2(COL2) (both, to support the join).
None of these indexes should be unique.
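In SQL Server these suggestions all translate to ordinary nonclustered (B-tree) indexes, since SQL Server 2005 offers no hash indexes. A sketch, with made-up index names:

```sql
-- Equality columns first, range column (COL3) last
CREATE NONCLUSTERED INDEX IX_TABLE1_COL2_COL1_COL3
    ON TABLE1 (COL2, COL1, COL3);

CREATE NONCLUSTERED INDEX IX_TABLE2_COL1_COL3
    ON TABLE2 (COL1, COL3);

-- Support the join TABLE1.COL4 = TABLE2.COL2 from both sides
CREATE NONCLUSTERED INDEX IX_TABLE1_COL4 ON TABLE1 (COL4);
CREATE NONCLUSTERED INDEX IX_TABLE2_COL2 ON TABLE2 (COL2);
```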
Best Answer
It will take marginally longer to generate the plan when there are more indexes to consider, but I doubt the difference would be measurable. The list of reasons for dropping an index does not include query performance. On the other hand, in general you shouldn't create indexes unless you know they will be used to make a query more efficient.
From the Oracle Concepts Guide, here are the criteria for creating an index.
All indexes will be considered in the sense that every index on the tables in the query is examined to determine whether it could be used. Those that could be used are then examined further to determine their usefulness.