Fill factor only comes into play when an index is created or rebuilt. It determines how full the leaf-level pages of the index are left after these operations (see the note below for clarification on which page levels are affected).
When a DML command (INSERT, UPDATE, or DELETE) modifies data, the change is also applied to the corresponding affected indexes. In other words, if you have a page that is 20% full and you insert data into that page, the page will then hold more than 20% (say 35%, just for example's sake). Do another insert, and now the page is 64% full. Rebuild the index, and the leaf-level pages will once again be filled to roughly the percentage you specify (or, implicitly, the server's default value).
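As a sketch (the table and index names are hypothetical), the fill factor is set when the index is created and is re-applied on every rebuild:

```sql
-- Hypothetical example: leave 20% free space on leaf pages (fill factor 80)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (FILLFACTOR = 80);

-- Subsequent inserts fill the pages past 80%; rebuilding the index
-- re-applies the fill factor setting
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80);
```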
(Note: when you leave PAD_INDEX at its default of OFF, the fill factor is applied only to leaf-level pages. When you set PAD_INDEX = ON, the fill factor is also taken into account for the intermediate-level pages of the index.)
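A sketch of applying the fill factor to the intermediate levels as well (index name hypothetical):

```sql
-- Hypothetical example: PAD_INDEX = ON makes the rebuild honor the
-- fill factor on intermediate-level pages, not just the leaf level
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80, PAD_INDEX = ON);
```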
The reason to adjust the fill factor (instead of using the default of 100/0) is to minimize page splits when inserting or updating data. But keep in mind that nothing is free: the lower the fill factor, the more space the data will normally take up. If you leave 80% of the page space free for your indexes, they will consume a relatively larger amount of disk space, which can lead to more reads.
From my understanding, when the data is inserted there will be roughly 20% of data in the pages. When the data is updated, however, it will expand beyond 20% of the page, up to the point of filling it and generating a page split, right?
When data is inserted, it goes into the appropriate page of each affected index. This could, and most likely would, push the page's consumption above the fill factor.
A page split happens when new data is added to a full index page. SQL Server then splits the page, moving approximately half of the data from the full page to a new page. Again, fill factor does not come into play here.
A legitimate reason to lower fill factor would be to minimize page splits, therefore minimizing index page fragmentation.
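One way to gauge how often splits are happening (a sketch; note the counter is cumulative since the instance last started, and it counts good and bad splits alike):

```sql
-- Cumulative page splits since SQL Server startup
SELECT cntr_value AS page_splits_since_startup
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec'
  AND object_name LIKE '%Access Methods%';
```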
The answer by ThomasStringer is very good and the usage of partitioning here would likely help overall performance and decrease your maintenance cost - however, it won't do anything for your page split scenario.
Can you verify that your FKs are trusted? This won't decrease or otherwise affect your page splits, but it will help your query execution time.
-- Foreign key check: builds ALTER statements to CHECK (validate) any
-- untrusted foreign keys.
SELECT 'ALTER TABLE ' + s.name + '.' + OBJECT_NAME(o.object_id)
     + ' WITH CHECK CHECK CONSTRAINT ' + i.name + '
GO'
FROM sys.foreign_keys i
INNER JOIN sys.objects o ON i.parent_object_id = o.object_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.is_not_trusted = 1
  AND i.is_not_for_replication = 0
UNION
-- Check constraint check: builds the same statements for any untrusted
-- check constraints.
SELECT 'ALTER TABLE ' + s.name + '.' + OBJECT_NAME(o.object_id)
     + ' WITH CHECK CHECK CONSTRAINT ' + i.name + '
GO'
FROM sys.check_constraints i
INNER JOIN sys.objects o ON i.parent_object_id = o.object_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.is_not_trusted = 1
  AND i.is_not_for_replication = 0;
-- Run the query with output set to TEXT; the result is a batch of ALTER
-- statements. Paste that output into a query window and execute it.
Another idea you already hit on: fill factor. If you use a low fill factor and combine it with either row- or page-level compression, your data density per page will increase. Again, this won't eliminate your page splits, but it can decrease their quantity (due to the higher data density per page).
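A sketch of combining the two (index name hypothetical):

```sql
-- Hypothetical example: row compression packs more rows per page, so a
-- modestly lowered fill factor still leaves headroom for new rows
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (DATA_COMPRESSION = ROW, FILLFACTOR = 90);
```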
Lastly, if you can, look at changing your index to lead with the log_time column instead of the foreign key column. Depending on how the log_time data comes in (hopefully it's more "in order" than the FK column), this could decrease your page splits significantly. If you pair it with both ROW compression and a good partition scheme, you might see some significant improvements.
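As a sketch (the table name and the column names other than log_time are hypothetical), the change amounts to reordering the index key:

```sql
-- Hypothetical: leading with the (mostly ascending) log_time column means
-- new rows land at the logical end of the index instead of splitting pages
-- in the middle
CREATE NONCLUSTERED INDEX IX_EventLog_LogTime_FkId
    ON dbo.EventLog (log_time, fk_id)
    WITH (DATA_COMPRESSION = ROW);
```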
Best Answer
You are correct, but this is actually caused by the fill factor value. You cannot say that fill factor played no part: precisely because the fill factor was 100 (assuming that is your case), this page split event was forced. Suppose you have an index page which is almost full and a new row needs to be accommodated; this causes a page split, moving half the rows to a new page and thereby creating fragmentation. So what caused this? The fill factor value of 100. That is why you must be cautious about the fill factor value you choose.
A low, nonzero fill-factor value may reduce the requirement to split pages as the index grows, but the index will require more storage space and can decrease read performance.
Also note that whatever fill factor value you use, you CANNOT avoid page splits entirely, even with ever-increasing key columns. There are two types of page splits: good and bad. The good one is simply the allocation of a new page as the index grows at its end; the bad one is when data is inserted into the middle, moving rows and creating a gap.
Some information about Fill Factor in Paul's Myth Series about Fill Factor
Read more about fill factor
No. After a restart, the value of fill factor in sp_configure will not change, even if the database has many page splits and many pages that are half filled. That condition is called fragmentation.
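For reference, the server-wide default can be inspected via sp_configure; the option name is 'fill factor (%)' and it is an advanced option:

```sql
-- Show the server-wide fill factor default
-- (0 means pages are filled fully, i.e. the same as 100)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'fill factor (%)';
```

Changing this value via sp_configure only affects indexes created or rebuilt afterward, and only those that do not specify an explicit FILLFACTOR.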