SQL Server – Oversized non-clustered index

Tags: index, sql-server

I have a table with 220 rows and about 20 columns. One of the non-clustered indexes has a datetime as the key column and includes a uniqueidentifier and a varchar(128). The clustered PK is a bigint.

The datetime for every row is updated very frequently (40 times a second).

If I rebuild the index it is about 200 KB, but after a couple of hours it will have grown to about 50 MB (and it appears to keep growing). Given how often it is accessed, scanning 50 MB compared with 200 KB is hurting a lot. SQL Server 2016 SP2 (13.0.5201.2).

Can anyone explain, or point me to something that explains, how an index with only 3 columns on a 200-row table can get to be 50 MB? And even better, is there anything I can do about it? (The 40 updates a second is not something I am happy with, but I haven't been able to convince anyone that it is excessive yet.)

Unfortunately, in-memory is not an option. Nor is changing the code.

Edit: Added output from the sys.dm_db_index_physical_stats function, as suggested by Andrew Sayer:
[Screenshot: output from sys.dm_db_index_physical_stats]
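For context, output like that comes from a query along these lines; dbo.MyTable below is a placeholder for the real table name, and DETAILED mode is used so the ghost-record columns are populated:

    SELECT i.name AS index_name,
           ips.index_level,
           ips.page_count,
           ips.record_count,
           ips.ghost_record_count,
           ips.version_ghost_record_count,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'DETAILED') AS ips  -- placeholder table name
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    ORDER BY i.name, ips.index_level;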

Possibly related info – Server is not in an availability group. Allow Snapshot Isolation is false but is_read_committed_snapshot_on is true.
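Those two settings can be confirmed from sys.databases, for example:

    SELECT name,
           snapshot_isolation_state_desc,
           is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = DB_NAME();   -- the database that holds the table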

Best Answer

This should not be possible, from an architectural standpoint:

The non-clustered index has 220 rows. Even if you had only one row per page in your index, that would be 220 pages × 8 KB = roughly 1.7 MB (plus a little bit in the non-leaf levels, but that is marginal). So something is "not normal" here - this should be impossible.
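As a quick sanity check, you can put the metadata's view of the index size next to that one-row-per-page ceiling; a minimal sketch, with dbo.MyTable and index_id = 2 as placeholders for your table and index:

    SELECT ps.row_count,
           ps.used_page_count,
           ps.used_page_count * 8 / 1024.0 AS used_mb,            -- 8 KB per page
           ps.row_count       * 8 / 1024.0 AS one_row_per_page_mb -- theoretical ceiling
    FROM sys.dm_db_partition_stats AS ps
    WHERE ps.object_id = OBJECT_ID(N'dbo.MyTable')   -- placeholder table name
      AND ps.index_id  = 2;                          -- placeholder index_id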

Are you sure you aren't looking at the size of the data, and that the table is a heap? SQL Server has a well-known issue with not freeing up space for heaps. That would explain it, and the solution would be to not have a heap. I see, however, that you don't have a heap, since you mention the clustered PK.
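If there is any doubt, sys.indexes shows whether a table is a heap (index_id = 0) or has a clustered index (index_id = 1):

    SELECT index_id, name, type_desc
    FROM sys.indexes
    WHERE object_id = OBJECT_ID(N'dbo.MyTable');   -- placeholder table name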

A very far-fetched theory is that SQL Server's metadata doesn't reflect reality regarding the space used by the index. We had that prior to SQL Server 7.0 (or was it 2005 when this was fixed?). We refreshed the metadata using DBCC UPDATEUSAGE. You can of course run this, just in case. It doesn't explain the bad performance, but it is a quick thing to verify. I.e., the theory is that you are hitting some code path in SQL Server where this issue (incorrect space-usage metadata) isn't handled.
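Running it is cheap; it can be scoped to the whole database or to a single table (the table name below is a placeholder):

    DBCC UPDATEUSAGE (0);                     -- 0 = current database
    -- or, limited to the table in question:
    DBCC UPDATEUSAGE (0, N'dbo.MyTable');     -- placeholder table name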

If this were me, I would first rule out the metadata aspect: look at space usage using several methods, just to verify that they agree on the size. A last resort would be to see what is really there using commands such as DBCC PAGE, sys.dm_db_database_page_allocations() and the like...
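For instance, sp_spaceused and the (undocumented) page-allocation DMF can be put next to the numbers from the catalog views; again a sketch with placeholder names:

    -- Method 1: space usage as reported by the metadata.
    EXEC sys.sp_spaceused @objname = N'dbo.MyTable';   -- placeholder table name

    -- Method 2: count the pages actually allocated to the index.
    SELECT COUNT(*)              AS allocated_pages,
           COUNT(*) * 8 / 1024.0 AS allocated_mb
    FROM sys.dm_db_database_page_allocations(
             DB_ID(), OBJECT_ID(N'dbo.MyTable'), 2, NULL, 'LIMITED') AS pa   -- index_id = 2 is a placeholder
    WHERE pa.is_allocated = 1;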

(Come to think of it, row versioning could potentially explain some extra space usage, but that would be in tempdb...)
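If you want to rule that out as well, the version store lives in tempdb and its size can be checked, for example:

    SELECT SUM(version_store_reserved_page_count)              AS version_store_pages,
           SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb   -- 8 KB per page
    FROM tempdb.sys.dm_db_file_space_usage;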