Here is a slightly simpler approach that avoids the cursor and nested exec:
SET NOCOUNT ON;

-- Holding table: one row per database with its VLF count
CREATE TABLE #to
(
    DBName SYSNAME,
    FileCount INT
);

-- Major version number (e.g. 11 = SQL Server 2012). DBCC LOGINFO gained a
-- RecoveryUnitId column in 2012, so the capture table's shape depends on it.
DECLARE @v INT;
SELECT @v = CONVERT(INT, PARSENAME(CONVERT(VARCHAR(32),
    SERVERPROPERTY('ProductVersion')), 4));
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'CREATE TABLE #ti
(
' + CASE WHEN @v >= 11 THEN 'RecoveryUnitId INT,' ELSE '' END + '
FileId int
, FileSize nvarchar(255)
, StartOffset nvarchar(255)
, FSeqNo nvarchar(255)
, Status int
, Parity int
, CreateLSN nvarchar(255)
);';
-- Append one block per online user database: capture DBCC LOGINFO output
-- into #ti, record the row (= VLF) count, then clear #ti for the next DB
SELECT @sql = @sql + N'
INSERT #ti EXEC ' + QUOTENAME(name)
    + '.sys.sp_executesql N''DBCC LOGINFO WITH NO_INFOMSGS'';
INSERT #to(DBName,FileCount) SELECT N''' + name + ''', COUNT(*) FROM #ti;
TRUNCATE TABLE #ti;'
FROM sys.databases
WHERE database_id > 4 AND [state] = 0; -- skip system databases; online only
EXEC sp_executesql @sql;
SELECT DBName, FileCount FROM #to -- WHERE FileCount > [some threshold];
DROP TABLE #to;
Transaction log writes are sequential. Only one of the log files will ever be written to at any one time, so having multiple files - in and of itself - can't possibly change your I/O patterns for that database.
Unless you are getting lucky by coincidence. For example, perhaps you added a second log file on an SSD or on an otherwise faster or less busy disk, or split the log files across multiple disks for several databases, and you are observing better I/O now because the log has switched to the file on the faster disk, or is more isolated from your other data and log files. In other words, I believe any observed I/O difference is due entirely to other factors and is merely a coincidence, not a result of adding log files alone.

SQL Server is explicitly designed to use only one log file at a time - so how could multiple log files possibly improve log write performance, unless the current log file happens to sit on faster or more isolated storage? I think you need to provide better empirical evidence (and in doing so you may discover for yourself the true cause of the improved performance).
Please read these posts in full - they were written by a pretty smart guy who worked on the SQL Server storage team for quite some time, so I don't think he's making any of this stuff up for fun:
Also Kimberly Tripp touches on this in a worthwhile article:
Note that none of the 8 steps involves adding a transaction log file. In fact, she recommends against it.
There are other perils to having multiple log files, particularly if they are large (think RTO) - and there really is nothing to gain.
Best Answer
What is a virtual log file?
SQL Server divides the transaction log file for each database into smaller chunks, called 'virtual log files' (or VLFs for short). Their primary function is as truncation markers: at a log backup, SQL Server will only clear (and mark available for re-use) VLFs that contain no active log records. MSDN has a piece on Transaction Log - Physical Architecture.
What determines the number of VLFs?
Each time a log file grows (whether via autogrowth or manual growth), the new section of the log file is divided into a number of VLFs, purely based on the size of the new section (the existing transaction log is left alone). So, small autogrowth settings (i.e. the 10% autogrowth that is the default) will cause a large number of VLFs to be created.
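For example, replacing a percentage-based autogrowth with a fixed increment keeps the size (and therefore VLF count) of each growth event predictable. This is only a sketch - the database name MyDB and logical file name MyDB_log are hypothetical; substitute your own:

```sql
-- Sketch: set a fixed log autogrowth increment instead of the 10% default.
-- MyDB and MyDB_log are hypothetical names - replace with your own.
ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);
```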
What are the implications of a large number of VLFs?
The primary issues a large number of VLFs causes are:

- slower database recovery (see the 'Help!' section below); and
- slower DBCC operations, since they use database snapshots in the background to facilitate consistency checks without blocking.

How can I find out how many VLFs my database has?
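One option, if you are on SQL Server 2016 SP2 or later, is the sys.dm_db_log_info dynamic management function, which returns one row per VLF and is easier to query than DBCC output (a sketch):

```sql
-- One row per VLF for the current database, queryable without a temp table
-- (requires SQL Server 2016 SP2 or later)
SELECT COUNT(*) AS VLFCount
FROM sys.dm_db_log_info(DB_ID());
```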
DBCC LOGINFO will return 1 row for each VLF in your database's transaction log. This question has a couple of useful scripts for applying that across all databases on a server.

How many VLFs is too many?
That's a judgment call you'll have to make for yourself. My personal rule of thumb is that under 50 isn't worth messing with, and over 100 (or so) and I fix autogrowth settings and make a mental note to (in the next maintenance window) shrink and regrow the log (as below).
Help! I have eleventy billion VLFs and my database recovery takes all day!
Short outline (from Kimberly Tripp's blog):
1. BACKUP LOG
2. DBCC SHRINKFILE with TRUNCATEONLY to shrink the log file to the smallest size possible.
3. ALTER DATABASE [...] MODIFY FILE [...] SIZE=newsize to resize your transaction log back up in a single step.

Note - if you have a very large log file (tens of GBs or more), you may want to resize in multiple steps to get an appropriate number of VLFs of an appropriate size, and to avoid excessively 'chunky' log backups. Since VLFs are the unit of truncation, they also determine log backup sizes, as detailed in Kim's blog.
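The outline above can be sketched in T-SQL. The database name MyDB, logical log file name MyDB_log, backup path, and 8GB target size are all assumptions for illustration - adjust them to your environment:

```sql
-- Sketch of the shrink-and-regrow sequence; all names, paths, and sizes
-- here are hypothetical placeholders.

-- 1. Back up the log so as much of it as possible can be truncated
BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_log.trn';

USE MyDB;

-- 2. Release unused space at the end of the log file
DBCC SHRINKFILE (MyDB_log, TRUNCATEONLY);

-- 3. Regrow in a single step (use multiple steps for very large logs)
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8GB);
```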