I'll be the first to admit the documentation is not very clear on this. It states that you should be offloading your backups to the secondaries, but while most statements are made in a general sense, I think they really mean log backups specifically (and copy_only backups, if you have some need for those).
You should still occasionally run full backups on the primary, IMHO. The copy_only restriction is about full backups, not log backups, and AFAIK it only applies to the secondary.
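For reference, this is roughly what the two backup types look like on a readable secondary (the database name and backup paths are placeholders, not from the question):

```sql
-- On the secondary replica: a full backup must be COPY_ONLY.
BACKUP DATABASE MyDatabase
TO DISK = N'X:\Backups\MyDatabase_copyonly.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM;

-- Log backups on a secondary are regular (non-copy-only) log backups
-- and do truncate the log across the AG:
BACKUP LOG MyDatabase
TO DISK = N'X:\Backups\MyDatabase_log.trn'
WITH COMPRESSION, CHECKSUM;
```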
You will not be able to use SHRINKFILE against your current transaction logs because they are full of activity that has technically not been backed up. Once you take a full (non-copy_only) backup on the primary and then let one log backup run, you should be able to shrink the log file manually. Your log backups are working now because the database is in the full recovery model, but the logs are continuing to grow because (presumably) you've never run a proper full backup on the primary.
This should be a one-time operation, and you shouldn't shrink the files too small; you need to be backing up the primary regularly, and you need to accommodate the largest burst of activity that will occur between full or log backups. I won't go on and on about how shrinking files only so they can grow again is a futile exercise that guarantees poor performance, but I could. :-)
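As a sketch, the one-time sequence described above looks like this (the database name, paths, and target size are placeholders; pick a target size that fits your own workload):

```sql
-- 1. One-time full (non-copy-only) backup on the PRIMARY.
BACKUP DATABASE MyDatabase
TO DISK = N'X:\Backups\MyDatabase_full.bak';

-- 2. Take (or wait for) a log backup so the inactive log can clear.
BACKUP LOG MyDatabase
TO DISK = N'X:\Backups\MyDatabase_log.trn';

-- 3. Now shrink the log file manually, once, to a sensible size.
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 4096);  -- target size in MB, placeholder value
```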
It sounds like you've got a huge transaction that has remained open and has done a lot of work. Run DBCC OPENTRAN against the database and see how long the oldest transaction has been open. You'll probably need to kill that transaction (or have the user commit, if it's someone sitting in Management Studio). Then the log will clear automatically.
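A quick way to run that check, plus a related query that tells you why the log can't clear (database name is a placeholder):

```sql
USE MyDatabase;   -- placeholder name
DBCC OPENTRAN;    -- reports the oldest active transaction, its SPID, and start time

-- Also useful: the reason log truncation is currently blocked
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';

-- If you must kill the offending session, use the SPID from DBCC OPENTRAN:
-- KILL 57;
```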
Best Answer
Reading the transaction log using fn_dblog is not supported by Microsoft, so you should avoid running it on a production server. You can run it in a UAT/test environment.
What exactly do you want to read from the output of fn_dblog? Can you be precise? There is a myriad of information you can get from its output. The output is not easy to read, and it requires some level of expertise to interpret.
I'll give you a demo. First, run fn_dblog with no filters to see all of the columns it produces; then use a query to filter the output down to the records for a particular transaction.
Example: I'll show everything that gets logged when a simple UPDATE command is fired.
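A minimal version of that demo might look like this (the table, column, and transaction names are made up for illustration):

```sql
-- See every column fn_dblog produces:
SELECT * FROM fn_dblog(NULL, NULL);

-- Fire a simple update inside a named transaction...
BEGIN TRAN DemoUpdate;
UPDATE dbo.SomeTable SET SomeColumn = N'new value' WHERE Id = 1;
COMMIT TRAN DemoUpdate;

-- ...then filter the log records down to just that transaction:
SELECT [Current LSN], Operation, Context, [Transaction Name], [Lock Information]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] IN
      (SELECT [Transaction ID]
       FROM fn_dblog(NULL, NULL)
       WHERE [Transaction Name] = N'DemoUpdate');
```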
It has one row for the begin tran, one for the commit, and one for the update operation itself. You can see LOP_BEGIN_XACT for the beginning of the transaction and LOP_COMMIT_XACT for the commit.
LOP_MODIFY_ROW says a row was modified, which is what our UPDATE statement did.
A clustered-index lock appears because the table has a clustered index, and the row must have been exclusively locked for the update.
Then comes the hexadecimal page ID, which tells you which page held the row that was modified.
The last column includes the lock information. The locks that were taken were:
HoBt 72057594041860096:ACQUIRE_LOCK_IX OBJECT: 8:613577224:0 ;ACQUIRE_LOCK_IX PAGE: 8:1:344 ;ACQUIRE_LOCK_X KEY: 8:72057594041860096 (8194443284a0)
That is database ID 8, file number 1, page 344. You can also see the KEY value.
There is also a log sequence number (LSN) for every log record, and the Log Record Length column tells you the size of the log record for each particular operation.
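For example, to list the LSN and log record size per operation in LSN order:

```sql
-- [Current LSN] and [Log Record Length] are standard fn_dblog columns
SELECT [Current LSN], Operation, [Log Record Length]
FROM fn_dblog(NULL, NULL)
ORDER BY [Current LSN];
```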
This is just a bit of the information available; there is too much you can derive from the contents of the transaction log to write it all down here. But I can point you to various blog posts by Paul Randal where he writes about using the output of fn_dblog to extract all sorts of information:
Time Pass with Transaction Log
Another dive into Transaction Log
Tracking page split using Transaction Log
How to tell who changed Log file characteristics
Using fn_dblog, fn_dump_dblog and restoring STOPBEFOREMARK using LSN
Read Paul's various articles and you will learn a lot about the contents of the transaction log.