SQL Server – Maximum transaction log size and impact

database-size | sql-server | transaction-log

I was doing some research on what the recommended maximum size for the transaction log is, but I cannot find any substantiated or documented answer. I learned that a log size of about 25–50 percent of the database size is not uncommon. I also know that you want to keep the number of VLFs at a recommended level and must tune autogrowth to prevent VLF fragmentation, which can slow down your queries and log backups.
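
For reference, this is roughly how I check the VLF count in my lab (a minimal sketch, assuming SQL Server 2016 SP2 or later, where sys.dm_db_log_info is available; older versions would need DBCC LOGINFO instead):

    -- Count the VLFs and total log size of the current database
    SELECT COUNT(*)         AS vlf_count,
           SUM(vlf_size_mb) AS total_log_size_mb,
           SUM(CASE WHEN vlf_active = 1 THEN vlf_size_mb ELSE 0 END) AS active_log_mb
    FROM sys.dm_db_log_info(DB_ID());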

The only thing I cannot find is what a recommended maximum size would be in relation to your database size, or possibly your workload. Of course, you probably don't want your log file to grow beyond the database size, but what would be the exact impact? Log backups will obviously take longer if the log file is huge AND, for example, 80 percent full.

But what if I have a database of 1 TB and a log file of (hypothetically 🙂) 2 TB, which would be only 10 percent full? I can imagine that log backups will take longer because the log backup has to scan the complete log, maybe? If your VLFs are not fragmented, then there is not really a problem, right?

Log backups are critical to SQL Server, so let's say you create a log disk of 2 TB and make the log file 500 GB. Now the backup fails for a few hours and the log fills up to 1.5 TB. Your log backup will take longer, much longer, sure. We have a performance impact because of the autogrowth events, yes, but we made sure the database did not stop. In such a scenario, when the backup is fixed and the first log backup has run, the log fill level is low again, but the log file is unusually large. Now, I cannot think of any critical reason to get the log file back to its normal size (let's say that 500 GB). Of course you don't want to do that during production hours due to locking. However, I cannot imagine that there would be absolutely no impact with such a huge log file (which would be mainly empty).
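
For what it's worth, this is how I check how full the log actually is before deciding anything (sys.dm_db_log_space_usage, available since SQL Server 2012, run in the context of the database in question):

    -- How full is the transaction log of the current database?
    SELECT total_log_size_in_bytes / 1048576.0 AS total_log_mb,
           used_log_space_in_bytes / 1048576.0 AS used_log_mb,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;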

Edit:

To clear up my question a little bit: what I actually want to know is the impact of a scenario where a transaction log is huge (let's say 3x the size of the database) but log backups are taken regularly and it is normally only about 10% full. Maybe slower log backups, possibly really bad VLF fragmentation?

I don't want to do this. I only want to understand what the impact will be when I find a case where this is the issue (yes, there are people who do this…). Of course I want to get it back to normal, but based on the impact, I can determine a priority. High: fix it now, no matter what downtime; medium: fix it within a week because it is not critical; low: fix it during the next maintenance window.

Again: I am not interested in how to size it, but in what the impact would be in a scenario where the file is already huge AND regular log backups are taken AND it is only about 10% full at most. Also, I want to be able to not only say that this is a bad, bad, bad, bad idea, but also why. Hope this helps to clear up my question. By the way, I tried to reproduce this scenario in my home lab, but it is difficult to see what the impact would be.

Best Answer

You are looking at the problem wrong.

Assuming the full recovery model, the size of your transaction log (T-log) is determined by your workload and how often you take log backups.

It is completely reasonable to have a 3 TB database with a 1 GB T-log if there is a lot of old data that only has simple reports run against it occasionally. Back up the T-log hourly and it may never grow.

In most cases you will want to take T-log backups every one to four hours. How much space is required to hold the log generated by your heaviest workload in that interval defines how big your T-log should be (plus a bit extra, just in case).
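
A rough way to measure that, assuming your log backup history is still in msdb and 'YourDatabase' stands in for the real database name:

    -- Size of each log backup over the last two weeks; the largest ones show
    -- how much log your heaviest backup interval generates.
    SELECT database_name,
           backup_start_date,
           backup_size / 1048576.0 AS log_backup_mb
    FROM msdb.dbo.backupset
    WHERE type = 'L'                                  -- log backups only
      AND database_name = 'YourDatabase'              -- placeholder
      AND backup_start_date >= DATEADD(DAY, -14, GETDATE())
    ORDER BY log_backup_mb DESC;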

If you have a virtual server, use a separate drive for the T-logs. For a brand new build, in most cases I start with 10 GB of disk and start the ldf file at 250 MB, watch it grow (autogrow 50 MB) for a few days or weeks, and then decide "What is the right number of VLFs for my required log size?" and adjust accordingly. I recheck about every 3 months.
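
Something along these lines, assuming the logical log file name is YourDatabase_log (adjust both names to your environment):

    -- Initial sizing for a new build: 250 MB log file, 50 MB autogrowth
    ALTER DATABASE [YourDatabase]
        MODIFY FILE (NAME = YourDatabase_log, SIZE = 250MB, FILEGROWTH = 50MB);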

See my post about VLF files here

Edit

no matter what downtime

There is no downtime for the fix. You want to do it during off-peak hours partly to reduce resource competition, but mostly because you want to get the ldf file as small as possible before you start a planned growth. It is easiest to do this when there is little or no activity.
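
A minimal sketch of that fix, assuming the logical log file name is YourDatabase_log and a 500 GB target size (both placeholders):

    USE [YourDatabase];
    -- Shrink the log file; only the unused portion at the end can be released
    DBCC SHRINKFILE (YourDatabase_log, 1024);        -- target size in MB
    -- Then grow it back to the intended size in one planned step
    -- (some prefer growing in ~8 GB chunks to keep individual VLFs smaller)
    ALTER DATABASE [YourDatabase]
        MODIFY FILE (NAME = YourDatabase_log, SIZE = 500GB);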

tune autogrowth to prevent VLF fragmentation

Not really sure what you mean by this. VLF files are written to in sequence. One gets full, the next available in line gets written to.
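
You can see that sequencing directly, again assuming SQL Server 2016 SP2+ for sys.dm_db_log_info:

    -- Each VLF in file order, with its reuse sequence number and whether it is active
    SELECT vlf_begin_offset,
           vlf_size_mb,
           vlf_sequence_number,
           vlf_active
    FROM sys.dm_db_log_info(DB_ID())
    ORDER BY vlf_begin_offset;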

I can imagine that log backups will take longer because the log backup

Maybe a minuscule impact, but nothing worth worrying about.

impact of a scenario where a transaction log is huge

If you are not currently having performance issues, the impact is mostly limited to restore times, which are more related to the number of VLFs than to the size of the file.

If you are having performance issues, they are likely tied to indexes, something data- or procedure-related, or possibly hardware. If the performance issue is index related, you want to fix it and get regular index maintenance started, then address the VLF files (because heavy index maintenance is hard on T-logs the first time).

In short, do VLF maintenance when you have the opportunity. It has been ugly for a long time. You are not going to see any magical improvements in daily performance.