Sql-server – SQL Server 2008, CPU at 100% until an index is recreated

sql-server-2008

I have a table with 4,000,000 rows (SQL Server 2008). It has a primary key composed of two int columns. One corresponds to the PK of another table in the same database, with a foreign-key relationship between them. The other corresponds to the PK of a table in another database. (I don't know if that matters.)

Sometimes, with the same query volume as usual, CPU usage climbs to 90-100% and some stored procedures on this table take far too long. At that moment, only SELECT queries are running against this table.

If I wait, CPU usage returns to normal after 1 or 2 hours.
But I can also drop and recreate the PK index, and CPU usage returns to normal immediately. So I think the problem is related to this index. Am I wrong?

But the index is not fragmented. I rebuild it every night in a maintenance plan. I even tried reorganizing it after each large insert or delete operation. I also tried updating its statistics. And I tried doing nothing at all.

I get this problem randomly every day, at different hours, with no way to reproduce it, so I don't know how I could compare the execution plans.
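One way to compare plans even when the problem is unpredictable is to capture them live from the DMVs while the CPU is high. A minimal sketch, run from a separate session (nothing here is specific to the table in question):

```sql
-- Capture the statement text and execution plan of currently running
-- requests, highest CPU first, while the problem is happening.
SELECT r.session_id,
       r.cpu_time,
       t.text       AS sql_text,
       p.query_plan AS live_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle)   AS t
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS p
ORDER BY r.cpu_time DESC;
```

Saving the `query_plan` XML during a good period and a bad period gives the two plans to diff.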

Thanks,

Loic

EDIT: This table is populated by an SSIS package. It inserts missing rows and deletes deprecated rows (no updates). The insert is done by fast load through an OLE DB destination limited to 500 rows per batch, so it performs a bulk insert.
I have tried adding several things at the end of this package, including reorganizing the indexes and updating statistics.

The issue does not occur during the execution of this SSIS package; it can appear, for example, 15 minutes after it ends. It can occur even when the package touches only a few rows (50, for example).

Best Answer

It sounds to me like your statistics are becoming stale, which is causing SQL Server to pick a poor execution plan to get at the data. Can you post the execution plan for when the query is working well and when it is working poorly?

As the table has 4M rows, the statistics won't update automatically until roughly 800,500 rows (20% of the table plus 500) have been inserted/updated/deleted, so if you pump in 600k rows the stats stay the same and the execution plan no longer matches the current data distribution.
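On SQL Server 2008 you can get a rough view of how close a table is to that threshold via the legacy `rowmodctr` column. A sketch, where `dbo.MyBigTable` is a placeholder for your table name:

```sql
-- Approximate number of modifications accumulated since each statistic
-- on the table was last updated (rowmodctr is approximate on 2008,
-- but good enough to see whether the ~800,500-row threshold is near).
SELECT i.name                    AS stats_name,
       STATS_DATE(i.id, i.indid) AS last_updated,
       i.rowmodctr               AS rows_modified_since_update
FROM sys.sysindexes AS i
WHERE i.id = OBJECT_ID('dbo.MyBigTable')  -- placeholder table name
  AND i.indid > 0;
```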

By dropping the index and recreating it you are effectively updating the statistics.

Next time the problem happens, try updating the statistics manually with the UPDATE STATISTICS statement to see if that does the trick. If it does, that's the problem, and you should look into writing a job which updates the statistics on a regular basis.
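For example (again with `dbo.MyBigTable` as a placeholder; FULLSCAN gives the most accurate histogram and should still be quick on a 4M-row table, or drop it to use default sampling):

```sql
-- Manually refresh all statistics on the table with a full scan.
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;
```

If this fixes the CPU spike, schedule the same statement (or `EXEC sp_updatestats;` database-wide) as a SQL Agent job running after the SSIS load.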