SQL Server – Disk size and log file growth during upgrades

disk-space, sql-server-2012, transaction-log, vldb

I have four SQL Server databases. Per the maximum load requirements, each database file can be between 1 and 2 TB in size. The storage is a multi-disk DAS with about 9 TB of space. During a database upgrade, the transaction log file for a single database can grow up to 4 TB in size. (Yeah, I added a new column to a multi-billion row table.) The databases use the simple recovery model.

I can find no recommendation (or rule of thumb) from Microsoft about how much disk space I need for a given database size. I understand that it can vary widely. If my understanding is correct, it's the largest single transaction I need to plan for, since space inside the log file is reused across transactions. I think the log growth is set to 10%, but even if it were set to 1 or 2 GB increments, I think I'd still be cutting it close.

Is there some strategy I should employ to avoid such huge log files, or do I simply need more disk space to avoid running out during an upgrade? If I'm doing something wrong, I want to learn how to do it properly.

I'm using SQL Server 2012.
Thanks!

Best Answer

(Yeah, I added a new column to a multi-billion row table.)

Adding a column to a very large table can have serious implications, but there is a clever way of adding a column as well.

From: Adding NOT NULL Columns as an Online Operation

Starting with SQL Server 2012 Enterprise Edition, adding a NOT NULL column with a default value is an online operation when the default value is a runtime constant. This means that the operation is completed almost instantaneously regardless of the number of rows in the table.

Adding a NOT NULL column with a default value that is not a runtime constant is always performed offline and an exclusive (SCH-M) lock is acquired for the duration of the operation.
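A minimal sketch of the difference (table and column names are hypothetical): the first statement is metadata-only on 2012 Enterprise Edition because the literal default is a runtime constant, while the commented-out second one is an offline, size-of-data operation because NEWID() is evaluated per row:

-- Online, metadata-only on SQL Server 2012 Enterprise Edition:
ALTER TABLE dbo.BigTable
    ADD CreatedOn datetime2 NOT NULL
        CONSTRAINT DF_BigTable_CreatedOn DEFAULT ('2019-01-01');

-- Offline, size-of-data (SCH-M lock for the duration), because
-- NEWID() is not a runtime constant:
-- ALTER TABLE dbo.BigTable
--     ADD RowGuid uniqueidentifier NOT NULL
--         CONSTRAINT DF_BigTable_RowGuid DEFAULT (NEWID());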

Enterprise Edition (though per a comment this also works on 2008 R2 Standard Edition) supports ALTER TABLE source_table SWITCH TO new_destination_table.
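For illustration, a hedged sketch of the switch itself (dbo.BigTable_New is hypothetical, and SWITCH requires it to match dbo.BigTable exactly in columns, data types, indexes and filegroup):

-- Metadata-only: moves every row instantly, no matter the table size,
-- and therefore generates almost no transaction log.
ALTER TABLE dbo.BigTable
    SWITCH TO dbo.BigTable_New;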

The databases use the simple recovery model.

In the simple recovery model, only a CHECKPOINT truncates the log, and truncation merely marks log space as reusable; it never shrinks the physical file. An open transaction also prevents truncation of its own log records, so the log file still has to be large enough to hold your single largest transaction.
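You can see what is currently preventing log truncation in each database (these are standard sys.databases columns):

-- In simple recovery you normally see NOTHING or CHECKPOINT here;
-- a long-running transaction shows up as ACTIVE_TRANSACTION and
-- pins the log until it commits or rolls back.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;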

If you record autogrowth metrics during a typical upgrade, an average of those collected metrics will give you a good starting figure. This script will help you get started (you need the default trace enabled and running on your server).
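A minimal version of that kind of query, reading the autogrowth events out of the default trace (EventClass 92 = data file auto grow, 93 = log file auto grow):

DECLARE @path nvarchar(260);

SELECT @path = path
FROM sys.traces
WHERE is_default = 1;

SELECT te.name                    AS event_name,
       t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.IntegerData * 8 / 1024   AS growth_mb,   -- IntegerData = 8 KB pages grown
       t.Duration / 1000          AS duration_ms  -- Duration is in microseconds
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON te.trace_event_id = t.EventClass
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime;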

I think the log growth is set to 10%, but even if it were set to 1 or 2 GB increments, I think I'd still be cutting it close.

I would suggest changing the autogrowth setting from a percentage to a fixed number of MB. As @AaronBertrand says in his comment:

The problem is that it takes longer and longer over time, since 10% of a growing file is also constantly growing itself - it's like compound interest, but you're paying, not receiving.
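A minimal sketch of that change (the database and logical log file names are hypothetical; look them up in sys.master_files first):

-- Find the logical file names and current growth settings:
SELECT name, type_desc, growth, is_percent_growth
FROM sys.master_files
WHERE database_id = DB_ID(N'YourDb');

-- Switch the log from percentage growth to fixed 4 GB increments:
ALTER DATABASE [YourDb]
    MODIFY FILE (NAME = N'YourDb_log', FILEGROWTH = 4096MB);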

For completeness, make sure you have Instant File Initialization enabled (grant the Perform Volume Maintenance Tasks privilege to the SQL Server service account) so data file autogrowth can take advantage of it. Note that log files cannot use instant initialization; log growths are always zero-initialized, which is one more reason to avoid many small growth events.
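On later builds (SQL Server 2016 SP1 and up; the column was backported in late service packs of older versions, so it may not exist on an unpatched 2012 instance) you can check the setting directly:

-- instant_file_initialization_enabled = 'Y' means the service account
-- holds the Perform Volume Maintenance Tasks privilege.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;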