SQL Server – In real terms, how much effect does a bad fill factor have on performance?

Tags: performance, sql-server, sql-server-2008, sql-server-2012

There's a storm brewing, or maybe I should say firestorm. A customer has recently implemented SQL Server 2012 Always On but that's not the most important bit.

A list of fires to be put out has been drawn up, and one of them is "incorrect fill factor". Someone, in their infinite wisdom, decided to recreate a multi-billion-row table with over 10 indexes without specifying the FILLFACTOR for the indexes during the recreation (so it shows in sys.indexes as 0).
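For context, the fill factor recorded for each index can be inspected with a catalog query along these lines (the table name is a placeholder; note that a fill_factor of 0 in sys.indexes means the server default, i.e. pages packed to 100%):

```sql
-- Inspect the fill factor of every index on a table.
-- 'dbo.BigTable' is a hypothetical name; substitute the real table.
SELECT i.name        AS index_name,
       i.type_desc,
       i.fill_factor  -- 0 means "default", which behaves like 100
FROM   sys.indexes AS i
WHERE  i.object_id = OBJECT_ID(N'dbo.BigTable');
```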

I have always thought that page splitting is bad, but in real terms, how bad is bad? Is it worse than not having the index at all? Is there a concrete argument for putting out this fire before the others? Note: each index rebuild will take a significant amount of time.

FYI – SQL Server 2012 Enterprise Edition

Best Answer

This is only a partial answer to your question, because you don't provide enough information on how the database will be used. I also think this question may be more appropriate on the database administrators site.

For instance, if the data is loaded nightly and the database is only used during the day for querying, then you want the fill factor to be 100%. That will be the most efficient approach for querying, minimizing the number of pages.
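If you do decide an index needs a different value, it can be set during a rebuild. This is a sketch only; the index and table names are illustrative, and ONLINE = ON is an option available in Enterprise Edition (which the poster has):

```sql
-- Rebuild one index with an explicit fill factor.
-- IX_BigTable_SomeColumn and dbo.BigTable are hypothetical names.
ALTER INDEX IX_BigTable_SomeColumn
    ON dbo.BigTable
    REBUILD WITH (FILLFACTOR = 100, ONLINE = ON);
```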

Page splitting in indexes occurs when a new record lands on a page that is full. The page is split into two pages, which are then, roughly, 50% full. Once the page is split, there is more space for those values.

If the data is being updated with multiple updates or inserts per second, then page splitting is unnecessary overhead. With pages packed to 100%, a split will probably happen the first time a new record lands on any given index page.

Note that page splitting is going to happen eventually once pages fill up, regardless of the fill factor. The major question is the "start-up" period: initially, nearly every insert or update will result in a page split, but this should quickly dampen.
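One way to get a rough feel for how much splitting is actually happening is the Page Splits/sec counter exposed through the DMVs. A sketch (the counter is cumulative since instance start, so sample it twice and take the difference to get a rate):

```sql
-- Cumulative page-split count from the Access Methods performance object.
SELECT object_name, counter_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  counter_name = N'Page Splits/sec';
```

Bear in mind this counter includes "good" splits (appending to the end of an index) as well as the mid-page splits that cause fragmentation, so treat it as an indicator, not a verdict.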

The overhead also depends on the characteristics of the implementation. If the indexes fit into memory, or if your disk system is solid-state disk, then the overhead might not be noticeable at all.

My guess is that this is not the most important problem, unless consistency of performance when the new database first goes online is paramount.