While investigating fragmentation on some of our large tables in a SQL Server 2005 database, I used a query based on sys.dm_db_index_physical_stats. The query returns one row for each level of an index's b-tree. Evaluating the clustered indexes, I noticed that fragmentation increased the further up the tree I went: at the leaf level it was only about 1%, but at the root level it was over 90%. Here's an example:
| TableName   | IndexName         | avg_fragmentation_in_percent | record_count |
|-------------|-------------------|------------------------------|--------------|
| MyTableName | MyClustered_Index | 91.66666667                  | 1,901        |
| MyTableName | MyClustered_Index | 35.77064703                  | 528,681      |
| MyTableName | MyClustered_Index | 1.948812233                  | 3,616,888    |
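The query in question was along these lines (a sketch only; the table name is a placeholder, and DETAILED mode is what produces one row per b-tree level):

```sql
-- Sketch of a per-level fragmentation query.
-- 'DETAILED' scans every page and returns one row per index level;
-- index_level = 0 is the leaf, higher values are closer to the root.
SELECT
    OBJECT_NAME(ips.object_id)       AS TableName,
    i.name                           AS IndexName,
    ips.index_level,
    ips.avg_fragmentation_in_percent,
    ips.page_count,
    ips.record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.MyTableName'), NULL, NULL, 'DETAILED') AS ips
JOIN sys.indexes AS i
  ON  i.object_id = ips.object_id
  AND i.index_id  = ips.index_id
ORDER BY ips.index_level DESC;   -- root level first, leaf level last
```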
Looking at this, I was curious: is the fragmentation I am seeing at the upper levels occurring because the pages those levels occupy behave the way really small tables do, being allocated from mixed extents, or something similar?
If that is not the case, do I need to be concerned about the fragmentation numbers in the upper levels, or is leaf-level fragmentation the only thing that matters here?
Thanks for your time.
EDIT 1:
Adding the query results with page counts:
| TableName | IndexName        | avg_fragmentation_in_percent | page_count |
|-----------|------------------|------------------------------|------------|
| MyTable   | MyClusteredIndex | 0.2687447                    | 823,086    |
| MyTable   | MyClusteredIndex | 6.389452333                  | 1,972      |
| MyTable   | MyClusteredIndex | 90.90909091                  | 11         |
Best Answer
Fragmentation in the non-leaf levels is typically not an issue, because of the huge difference in page counts between the leaf and non-leaf levels. For example, in a fairly ordinary 8 million row table with an int primary key, the non-leaf levels amount to only 659 pages in total.
Those 659 non-leaf pages have a very good chance of staying in the buffer pool, because every seek into the table starts at the root and walks down the tree. Even if they did have to be read from disk as completely random 8 KB I/O, they would still make up less than half of one percent of the table.
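In practice, then, index-maintenance decisions can be based on the leaf level alone. A sketch of such a check (the 1,000-page and 30% thresholds are the commonly cited guidance, not values from this answer):

```sql
-- Sketch: assess only leaf-level fragmentation (index_level = 0) when
-- deciding whether to rebuild or reorganize. LIMITED mode is cheap and
-- reports leaf-level statistics only.
SELECT
    OBJECT_NAME(ips.object_id)       AS TableName,
    ips.index_id,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.index_level = 0
  AND ips.page_count > 1000                    -- skip tiny indexes
  AND ips.avg_fragmentation_in_percent > 30;   -- common rebuild threshold
```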