You've got two questions in here:
Is the above data sufficient information to base an index rebuild
decision upon or would I need to have additional metrics involved?
One missing link is the size of the index. If you're talking about an object with less than, say, 1000 pages, then index rebuilds aren't all that critical.
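To check where your index falls on that size threshold, you can query `sys.dm_db_index_physical_stats` in `LIMITED` mode. A minimal sketch, assuming a hypothetical table `dbo.Offers` and filtered index `IX_Offers_Pending` (substitute your own names):

```sql
-- Hypothetical names: replace dbo.Offers / IX_Offers_Pending with your own.
SELECT i.name,
       ps.page_count,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Offers'),
                                    NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE i.name = 'IX_Offers_Pending';
-- If page_count is under ~1000, rebuilds aren't all that critical.
```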
Another missing link would be the churn of the index. Typically I see filtered indexes used when they're a very, very small subset of the entire table, and the subset changes fast. Guessing by the name of your filtering field (OfferStatus = 0), it sounds like you're indexing just the rows where you haven't made an offer yet, and then you're going to immediately turn around and make an offer. In situations like that, the data's changing so fast that index rebuilds usually don't make sense.
Or, for filtered indexes do we even care about fragmentation and
should just explicitly update statistics at X interval?
SQL Server updates stats on objects when ~20% of the data changes, but filtered indexes & stats are a special case. They're also updated when 20% of the data changes - but it's 20% of the base table, not 20% of the filtered subset. Because of that, you probably want to manually update stats on them periodically. I love Ola Hallengren's maintenance scripts for this - the index maintenance stored proc has a parameter for updating statistics, and another parameter for choosing what level of sampling you want, and another parameter for choosing whether to update stats on all objects or only the ones with changed rows. It's fantastic.
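If you want to handle the filtered statistics manually rather than through Ola's scripts, the periodic update described above might look like this (table and index names are hypothetical placeholders):

```sql
-- Hypothetical names: dbo.Offers and its filtered index IX_Offers_Pending.
-- FULLSCAN is usually cheap here because the filtered subset is small.
UPDATE STATISTICS dbo.Offers IX_Offers_Pending WITH FULLSCAN;
```

Scheduling this in a SQL Agent job at whatever interval matches your churn rate sidesteps the 20%-of-the-base-table threshold entirely.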
As you perform inserts, updates, and deletes, your indexes will become fragmented both internally and externally.
Internal fragmentation is when you have a high percentage of free space on your index pages, meaning that SQL Server needs to read more pages when scanning the index.
External fragmentation is when the pages of the index are no longer in order, so SQL Server has to do more work, especially in I/O terms, to read the index.
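Both kinds of fragmentation can be measured from the same DMV. A sketch, again assuming the hypothetical `dbo.Offers` table; note that the internal-fragmentation column is only populated in `SAMPLED` or `DETAILED` mode:

```sql
SELECT ps.index_id,
       ps.avg_fragmentation_in_percent,   -- external: logical page order
       ps.avg_page_space_used_in_percent  -- internal: page fullness (NULL in LIMITED mode)
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Offers'),
                                    NULL, NULL, 'SAMPLED') AS ps;
```

High `avg_fragmentation_in_percent` means out-of-order pages; low `avg_page_space_used_in_percent` means lots of wasted free space per page.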
If your indexes become too fragmented then, at best, your queries will be less efficient, but at worst SQL Server will just stop using the indexes altogether, meaning virtually all queries would have to perform a table scan or clustered index scan. This will hurt your performance a lot!
When you reorganise an index, SQL Server uses the existing index pages and just shuffles data around on those pages. This alleviates internal fragmentation and can also remove a small amount of external fragmentation. It is a lighter-weight operation than a rebuild and is always online.
When you rebuild an index, SQL Server actually re-sorts the data of the index and uses a new set of index pages. This obviously alleviates both internal and external fragmentation, but it is a more heavyweight operation and by default causes the index to go offline, although it can be performed as an online operation, depending on your SQL Server version and edition.
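The two operations above map directly to `ALTER INDEX` syntax (index and table names are hypothetical):

```sql
-- Lightweight, always online: shuffles data on existing pages.
ALTER INDEX IX_Offers_Pending ON dbo.Offers REORGANIZE;

-- Heavier: rewrites the index onto new pages. ONLINE = ON keeps it
-- available during the rebuild, but requires an edition that supports
-- online index operations (e.g. Enterprise).
ALTER INDEX IX_Offers_Pending ON dbo.Offers REBUILD WITH (ONLINE = ON);
```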
Please do not expect to have 0 fragmentation after a rebuild, however. Unless you use a MAXDOP query hint, SQL Server will parallelise the rebuild operation, and the more processors involved, the more fragmentation there is likely to be, because each processor or core will rebuild its own section, or fragment, of the index individually, without regard for the others. This is a trade-off between the best fragmentation levels and the time taken to rebuild the index. For near-zero fragmentation, use MAXDOP 1 and sort the results in TempDB.
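Both of those options can be specified in the rebuild's `WITH` clause (hypothetical names again):

```sql
-- MAXDOP = 1 trades rebuild time for minimal fragmentation;
-- SORT_IN_TEMPDB = ON does the intermediate sort in tempdb.
ALTER INDEX IX_Offers_Pending ON dbo.Offers
REBUILD WITH (MAXDOP = 1, SORT_IN_TEMPDB = ON);
```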
Best Answer

Under the covers, sp_indexoption just does an ALTER INDEX as well. My recommendation would be to forget about sp_indexoption and just utilize ALTER INDEX.

As noted in the BOL reference for ALTER INDEX, when you specify just the set options:
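For concreteness, setting an index option this way looks like the following (index and table names are hypothetical; ALLOW_PAGE_LOCKS is just one example of a set option):

```sql
-- Changes index metadata only; no rebuild or reorganize is performed.
ALTER INDEX IX_Offers_Pending ON dbo.Offers
SET (ALLOW_PAGE_LOCKS = OFF);
```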