Did a manual failover to the passive node after Windows updates today, and recovery was fast for all but one database. The slow one had a transaction log file size of 203,200.00 MB, with log space percent used = 0.25. Will the large transaction log file size slow down recovery?
SQL Server – Will a large transaction log file slow DB recovery?
sql-server, sql-server-2014
Related Solutions
How to reduce the size of the transaction log file without the shrink process (because it's bad – increases fragmentation – reduces performance)
That is not true. A log shrink is quite benign; you are thinking of data file shrinks. See How to shrink the SQL Server log for an explanation of why it grows, how to shrink it, and why doing so is benign.
My first recommendation is to use a smart index maintenance plan. Overactive rebuilds are expensive, harmful, and completely unnecessary. Many swear by Ola Hallengren's index maintenance scripts.
You should also look into leveraging minimal logging. Index maintenance is a prime candidate for minimally logged operations, but you must enable them by using the bulk-logged recovery model for your database. See Operations That Can Be Minimally Logged:
If the database is set to the simple or bulk-logged recovery model, some index DDL operations are minimally logged whether the operation is executed offline or online.
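As a sketch of what that looks like in practice (the database, table, and backup path names below are placeholders, not from the original question):

```sql
-- Switch to BULK_LOGGED just for the maintenance window so index
-- rebuilds can be minimally logged, then return to FULL and take a
-- log backup to re-establish the point-in-time recovery chain.
ALTER DATABASE YourDb SET RECOVERY BULK_LOGGED;

ALTER INDEX ALL ON dbo.YourBigTable REBUILD;

ALTER DATABASE YourDb SET RECOVERY FULL;
BACKUP LOG YourDb TO DISK = N'D:\Backup\YourDb_log.trn';
```

Keep the window short: while a log backup contains a minimally logged operation, you cannot restore to a point in time inside that backup.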
I want to shrink the 10GB file to a sensible size (maybe 200MB with autogrow 5 MB?).
IMHO, 5 MB autogrowth is not sensible. You should monitor your log growth using the default trace and find a good value for autogrowth.
The script below will help you monitor your log (and data file) autogrowths. Trend it over a week (or whatever suits your needs); this will help you get a near-to-best number for your autogrowth setting.
IF OBJECT_ID('tempdb..#autogrowthTotal') IS NOT NULL
    DROP TABLE #autogrowthTotal;
IF OBJECT_ID('tempdb..#autogrowthTotal_Final') IS NOT NULL
    DROP TABLE #autogrowthTotal_Final;

DECLARE @filename NVARCHAR(1000);
DECLARE @bc INT;
DECLARE @ec INT;
DECLARE @bfn VARCHAR(1000);
DECLARE @efn VARCHAR(10);

-- Get the path of the current default trace file
SELECT @filename = CAST(value AS NVARCHAR(1000))
FROM ::fn_trace_getinfo(DEFAULT)
WHERE traceid = 1 AND property = 2;

-- Strip the rollover number from the file name so that
-- fn_trace_gettable reads the whole set of trace files
SET @filename = REVERSE(@filename);
SET @bc = CHARINDEX('.', @filename);
SET @ec = CHARINDEX('_', @filename) + 1;
SET @efn = REVERSE(SUBSTRING(@filename, 1, @bc));
SET @bfn = REVERSE(SUBSTRING(@filename, @ec, LEN(@filename)));
SET @filename = @bfn + @efn;

-- Collect all auto-grow events from the default trace
SELECT
      ftg.StartTime
    , te.name AS EventName
    , DB_NAME(ftg.databaseid) AS DatabaseName
    , ftg.[FileName] AS LogicalFileName
    , (ftg.IntegerData * 8) / 1024.0 AS GrowthMB
    , (ftg.duration / 1000) AS DurMS
    , mf.physical_name AS PhysicalFileName
INTO #autogrowthTotal
FROM ::fn_trace_gettable(@filename, DEFAULT) AS ftg
INNER JOIN sys.trace_events AS te
    ON ftg.EventClass = te.trace_event_id
INNER JOIN sys.master_files AS mf
    ON mf.database_id = ftg.databaseid
   AND mf.name = ftg.[FileName]
WHERE ftg.EventClass = 92  -- Data File Auto-grow
   OR ftg.EventClass = 93  -- Log File Auto-grow
ORDER BY ftg.StartTime;

-- Aggregate per day / database / file
SELECT COUNT(1) AS NoOfTimesEventFired
    , CONVERT(VARCHAR(10), StartTime, 120) AS StartTime
    , EventName
    , DatabaseName
    , LogicalFileName
    , PhysicalFileName
    , SUM(GrowthMB) AS TotalGrowthMB
    , SUM(DurMS) AS TotalDurationMS
INTO #autogrowthTotal_Final
FROM #autogrowthTotal
GROUP BY CONVERT(VARCHAR(10), StartTime, 120), EventName, DatabaseName, LogicalFileName, PhysicalFileName
HAVING COUNT(1) > 5 OR SUM(DurMS) / 1000 > 60  -- change these thresholds for fine-tuning
ORDER BY CONVERT(VARCHAR(10), StartTime, 120);

-- Report back the results
SELECT *
FROM #autogrowthTotal_Final
-- Optional filter: WHERE DatabaseName = ''
ORDER BY TotalDurationMS DESC;
Why shouldn't I use autogrow if a log file is really being recreated after a differential backup?
This is not true. For databases in the FULL or BULK_LOGGED recovery model, taking log backups truncates the transaction log: the inactive portion of the log is marked as "no longer needed" and can be overwritten. The file itself is not recreated or resized.
Read more: Misconceptions around the log and log backups: how to convince yourself, and 8 Steps to better Transaction Log throughput.
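For illustration, a routine log backup looks like this (the database name and path are placeholders); after it completes, the inactive portion of the log becomes reusable without the file changing size:

```sql
BACKUP LOG YourDb TO DISK = N'D:\Backup\YourDb_log.trn';

-- If the log still cannot be truncated, this column tells you why
-- (e.g. ACTIVE_TRANSACTION, LOG_BACKUP, REPLICATION)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDb';
```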
I want to shrink the 10GB file to a sensible size (maybe 200MB with autogrow 5 MB?). How can I do it and what are the consequences of shrinking the file?
You can use DBCC SHRINKFILE(yourdb_log, 200);. Remember to use SHRINKFILE and NOT SHRINKDATABASE.
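Putting the two together, a minimal sketch (the logical file name and the growth increment are assumptions; pick the increment from your own autogrowth monitoring):

```sql
USE YourDb;

-- Find the logical name and current size of the log file
SELECT name, type_desc, size * 8 / 1024 AS SizeMB
FROM sys.database_files;

-- Shrink the log file to ~200 MB (the target size is in MB)
DBCC SHRINKFILE (yourdb_log, 200);

-- Then set a sensible fixed autogrowth, e.g. 512 MB instead of 5 MB
ALTER DATABASE YourDb
MODIFY FILE (NAME = yourdb_log, FILEGROWTH = 512MB);
```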
I would suggest being PROACTIVE (monitor your log autogrowths and set a sensible autogrowth value) rather than REACTIVE (frequently shrinking the log file because you think you will need the disk space, or for whatever other reason).
You should read the answers from Mike Walsh and Aaron Bertrand to Why Does the Transaction Log Keep Growing or Run Out of Space? to understand the consequences of shrinking the log file.
Related Questions
- SQL Server – Backup log larger than log file
- SQL Server – Should I shrink the log file?
- SQL Server – Unexpected 100 GB growth of a transaction log file that is part of transactional replication
- SQL Server – Why the transaction log file grows large after disabling replication
- SQL Server – Is there a way to get a history of transaction log used space?
- SQL Server – How to troubleshoot large transaction log file size
Best Answer
Not directly, but if there was a large transaction running at the time of failover, that could cause both a large log file and a slow failover.
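One way to check for that before a planned failover (the database name is a placeholder) is to look for open transactions and how much log each one is holding:

```sql
-- Oldest active transaction in one database
DBCC OPENTRAN ('YourDb');

-- Active transactions across the instance, with their log usage
SELECT DB_NAME(dt.database_id) AS database_name,
       st.session_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
     ON st.transaction_id = dt.transaction_id
ORDER BY dt.database_transaction_begin_time;
```

A transaction still open at failover must be rolled back during recovery, and the more log it has generated, the longer that undo phase takes.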