A single transaction log file has a physical size (the size you see on disk), and it is also divided internally into logical sections called virtual log files (VLFs).
Both auto-growth and auto-shrink operate on the physical transaction log file.
Transaction log truncation (also called "log clearing") operates on the logical sections of the transaction log (VLFs), and does not affect the physical file size. This distinction is a frequent source of confusion.
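If you want to see the VLF breakdown for yourself, sys.dm_db_log_info exposes it (available from SQL Server 2016 SP2 onward; on older versions DBCC LOGINFO returns similar output). A minimal sketch, assuming a database named yourdb:
-- number of VLFs, and how many are currently active (and so can't be truncated yet)
SELECT vlf_count = COUNT(*),
  active_vlfs = SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END)
FROM sys.dm_db_log_info(DB_ID(N'yourdb'));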
A log file must always grow to accommodate a large transaction; with auto-shrink turned off, the log file simply stays at the maximum size it needed, instead of being physically shrunk afterwards.
If you don't have large transactions, it is safe to disable auto-shrink; the log file will not grow without bound, as it would if the database were in FULL or BULK_LOGGED recovery and you weren't taking transaction log backups.
This behaviour is the same for SQL Server 2005+.
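To check whether auto-shrink is currently enabled, and to turn it off, something along these lines (yourdb is a placeholder name):
-- 1 = auto-shrink is enabled
SELECT name, is_auto_shrink_on FROM sys.databases WHERE name = N'yourdb';

ALTER DATABASE yourdb SET AUTO_SHRINK OFF;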
Seems like an appropriate way to do it.
Create a logging table:
CREATE TABLE dbo.LogSpace
(
  dt                 DATETIME2 NOT NULL DEFAULT SYSDATETIME(), -- capture time
  dbname             SYSNAME,        -- "Database Name" from DBCC SQLPERF
  log_size_mb        DECIMAL(22,7),  -- "Log Size (MB)"
  space_used_percent DECIMAL(8,5),   -- "Log Space Used (%)"
  [status]           BIT             -- "Status"
);
Do this before and after your load:
INSERT dbo.LogSpace(dbname, log_size_mb, space_used_percent, [status])
EXEC sp_executesql N'DBCC SQLPERF(LogSpace) WITH NO_INFOMSGS;';
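In other words, the pattern around the load looks something like this (dbo.YourDataCleanup is a hypothetical stand-in for whatever your load actually is):
-- before
INSERT dbo.LogSpace(dbname, log_size_mb, space_used_percent, [status])
EXEC sp_executesql N'DBCC SQLPERF(LogSpace) WITH NO_INFOMSGS;';

-- your load / cleanup here (hypothetical placeholder)
EXEC dbo.YourDataCleanup;

-- after
INSERT dbo.LogSpace(dbname, log_size_mb, space_used_percent, [status])
EXEC sp_executesql N'DBCC SQLPERF(LogSpace) WITH NO_INFOMSGS;';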
Optionally, remove any rows not related to this specific database:
DELETE dbo.LogSpace WHERE dbname <> N'yourdb';
Then you can compare the before and after size/space used for any given date, or for all dates you have collected.
;WITH x AS
(
  SELECT dbname, dt,
    duration = DATEDIFF(SECOND, LAG(dt) OVER
               (PARTITION BY dbname ORDER BY dt), dt),
    [current] = space_used_percent,
    previous  = LAG(space_used_percent) OVER
                (PARTITION BY dbname ORDER BY dt),
    rn = ROW_NUMBER() OVER
         (PARTITION BY dbname ORDER BY dt),
    log_size_mb
  FROM dbo.LogSpace
)
SELECT * FROM x WHERE rn % 2 = 0;
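If you'd rather think in megabytes than percentage points, each sample converts easily, since DBCC SQLPERF reports both the size and the percent used; a minimal sketch against the same table:
-- log space actually used, in MB, per collected sample
SELECT dbname, dt,
  used_mb = log_size_mb * space_used_percent / 100.0
FROM dbo.LogSpace
ORDER BY dbname, dt;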
Keep in mind that checkpoints that happen during your process can allow log space to be reused; I remember doing some performance testing recently where, after certain operations, space_used_percent actually went down. So you may want to take the maximum observed over a few days (and maybe sample more often - in which case you want a slightly different query, one that doesn't assume pairs of consecutive rows are related to any specific activity), rather than just relying on how things ended up after the load.
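That "slightly different query" could be as simple as an aggregate over the whole collection window; a sketch:
-- highest usage and largest physical size observed per database
SELECT dbname,
  max_used_mb = MAX(log_size_mb * space_used_percent / 100.0),
  max_size_mb = MAX(log_size_mb)
FROM dbo.LogSpace
GROUP BY dbname;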
Also make sure that the autogrow settings for the log file are reasonable - you don't want 1MB or 10%, but you don't want 10GB, either. Since an autogrow event for a log file (a) makes all transactions wait and (b) does not benefit from instant file initialization, you want a good balance between how many times the log file has to grow during an abnormal operation like your data cleanup, and how long any individual growth event takes. If the operation was recent enough, you can review the growth events in the default trace to see how long each one took.
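Setting a sensible fixed growth increment, and pulling recent log auto-grow events out of the default trace, could look roughly like this (yourdb and yourdb_log are placeholder names; event class 93 is Log File Auto Grow, Duration is reported in microseconds, and IntegerData is the number of 8 KB pages grown):
-- fixed increment instead of a percentage (pick a size that fits your workload)
ALTER DATABASE yourdb MODIFY FILE (NAME = yourdb_log, FILEGROWTH = 512MB);

-- recent log auto-grow events from the default trace
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT DatabaseName, [FileName], StartTime,
  duration_ms = Duration / 1000,
  growth_mb   = IntegerData * 8 / 1024.0
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 93 -- Log File Auto Grow
ORDER BY StartTime DESC;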
Best Answer
Nothing is logged by default, generally speaking. So, unless you have something keeping track of the amount of log generated, you are out of luck.
Ordinarily, we would recommend checking the size of your log backups, but as I understand it you don't have those, and the purpose here is to determine how much space you would need for them. Catch-22. :-)
Even with some active monitoring, you would have to check this pretty frequently, since you never know when a checkpoint kicks in and truncates the log. There's a perfmon counter you can use for this: "Log File(s) Used Size (KB)". And you can grab the value from sys.dm_os_performance_counters. I.e., it's pretty easy to create a job that uses a SELECT to check the used space in the tlog for selected databases, store it in a table, and execute that job every minute, or at whatever frequency you find suitable.
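Such a SELECT could look roughly like this (yourdb is a placeholder; note the DMV's string columns are fixed-width nchar, hence the RTRIM):
-- current used size of the transaction log, per database
SELECT dbname = RTRIM(instance_name),
  log_used_kb = cntr_value,
  captured_at = SYSDATETIME()
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE N'Log File(s) Used Size (KB)%'
  AND instance_name = N'yourdb';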