SQL Server – Index rebuild with data compression generated large TLog

Tags: index, sql-server, sql-server-2014

We are following vendor recommendations for an index rebuild that changes the fill factor and enables data compression. On the test server the transaction log grew past 650 GB and we ran out of drive space, which has us wondering what is going on. Why is the TLog growing so much?

SQL Server Version 12.0.2456.0

Vendor code provided:

alter index all on dbo.ACC_LOG_DTL_IX rebuild with   
( data_compression=PAGE, fillfactor=85, online=ON)

Table properties

This is the script for the table and the one index.

CREATE TABLE [dbo].[ACC_LOG_DTL_IX](
    [ACCESS_INSTANT] [numeric](16, 6) NOT NULL,
    [PROCESS_ID] [varchar](254) NOT NULL,
    [DATA_MNEMONIC_ID] [varchar](15) NOT NULL,
    [STRING_VALUE] [varchar](2000) NULL,
    [INTEGER_VALUE] [numeric](18, 0) NULL,
 CONSTRAINT [PK_ACC_LOG_DTL_IX] PRIMARY KEY CLUSTERED 
(
    [ACCESS_INSTANT] ASC,
    [PROCESS_ID] ASC,
    [DATA_MNEMONIC_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Best Answer

Regardless of your recovery model, whenever an operation such as an index rebuild or reorganize modifies a data page, that page modification is written to the transaction log. In your case, lowering the fill factor to 85 leaves 15% free space on every leaf page, which adds pages and moves rows onto new pages, and a rebuild that applies page compression rewrites every page of the index. SQL Server has to keep track of all of those page changes, so the more pages that change, the more records end up in the transaction log.
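If you want to watch the log while the rebuild runs, here is a minimal sketch for monitoring log usage (run it in the affected database; both queries work on SQL Server 2014):

    -- Log size and percentage used for the current database
    SELECT total_log_size_in_bytes / 1048576.0 AS total_log_mb,
           used_log_space_in_bytes / 1048576.0 AS used_log_mb,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;

    -- Older, instance-wide alternative
    DBCC SQLPERF(LOGSPACE);

If you rebuild one index at a time instead of using ALL, taking a log backup between indexes (under the full recovery model) gives the log a chance to truncate between steps and keeps the growth bounded.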

In your case I would ask: what storage are you using, and what does your fragmentation look like? If you're on flash storage, rebuilding indexes to address fragmentation is much less important than it is on magnetic storage.
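To answer the fragmentation question, here is a sketch that reports fragmentation and page counts for the indexes on the table in your question (the schema and table names are taken from the script you posted):

    SELECT i.name                          AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.ACC_LOG_DTL_IX'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id;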