SQL Server – Transaction log queries


I have used these queries to access transactions from a SQL Server database:

USE Action
GO
SELECT * FROM fn_dblog(NULL, NULL)

This query works, but it only returns 7 rows even though one table in the database holds over 100,000 records.
Next, I used this query to recover history from a log backup:

SELECT  * FROM fn_dump_dblog(NULL,NULL,'DISK',1
,'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\Action.bak'
,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL
,NULL,NULL,NULL,NULL,   NULL,NULL,NULL,NULL,NULL,NULL
,NULL,NULL,NULL,NULL,   NULL,NULL,NULL,NULL,NULL,NULL
,NULL,NULL,NULL,NULL,   NULL,NULL,NULL,NULL,NULL,NULL
,NULL,NULL,NULL,NULL,   NULL,NULL,NULL,NULL,NULL,NULL
,NULL,NULL,NULL,NULL,   NULL,NULL,NULL,NULL,NULL,NULL
,NULL,NULL,NULL,NULL)

This query ran without error but returned only 37 rows. If I restore the backup, one table contains more than 100,000 records, so I would expect these queries to return hundreds or thousands of rows.

The only logical answer would be that the transactions in the file are limited. But the transaction log file itself is 1.5 GB. Is there any command I can run to check the number of transactions in the log file?

Can anyone explain how I can make these queries work to recover every transaction in the transaction log?

Best Answer

This query works, but it only returns 7 rows even though one table in the database holds over 100,000 records.

The thing with sys.fn_dblog is that it only shows you the active portion of the log. If you want every transaction that ever happened through means such as this, I'd suggest there are more useful ways to spend your time.
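As an illustration (assuming a database named Action, as in the question), you can check what is limiting log truncation, and therefore how much of the log fn_dblog can currently see:

```sql
-- Why can't I see older log records? After a checkpoint or log backup,
-- inactive portions of the log are freed for reuse, and fn_dblog can no
-- longer read them. log_reuse_wait_desc shows what, if anything, is
-- keeping the log from being truncated.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'Action';
```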

This query ran without error but returned only 37 rows. If I restore the backup, one table contains more than 100,000 records, so I would expect these queries to return hundreds or thousands of rows.

A single log backup doesn't hold everything that ever happened; you'll need every log backup ever taken. If your database was ever in the simple recovery model, or if it was in the full recovery model and anything happened in it before the first full backup was taken, then you're not going to get everything back. Period.
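To see which log backups exist for the database, you can query the backup history in msdb. This is a sketch, and it is only as complete as the history msdb has retained:

```sql
-- List every transaction log backup recorded for the database.
-- msdb history can be pruned, so rows here are not a guarantee that
-- this is every log backup ever taken.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
WHERE bs.type = 'L'                 -- 'L' = transaction log backup
  AND bs.database_name = 'Action'   -- database name from the question
ORDER BY bs.backup_start_date;
```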

Just because there are 100k records in the table doesn't mean there were 100k transactions, either. It could have been one big transaction, or two transactions that inserted 200k rows followed by a third that deleted 100k, and so on ad nauseam.
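You can see this for yourself by grouping the log records in the active log by transaction. A rough sketch (fn_dblog is undocumented, so its column names and operation codes are subject to change):

```sql
-- Count logged insert operations per transaction in the active log.
-- One transaction can account for any number of inserted rows.
SELECT [Transaction ID],
       COUNT(*) AS insert_log_records
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_INSERT_ROWS'
GROUP BY [Transaction ID]
ORDER BY insert_log_records DESC;
```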

The only logical answer would be that the transactions in the file are limited.

Yes, in terms of what you can read at any given point: the log file can be thought of as a giant ring buffer that overwrites older, inactive portions. It's circular and will keep reusing inactive portions; it only has to grow when it fills up because all of the current space is active.
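You can observe this reuse at the virtual log file (VLF) level. A sketch — DBCC LOGINFO is itself undocumented, and on SQL Server 2016 SP2 and later sys.dm_db_log_info is the supported alternative:

```sql
-- Each row is one VLF. Status = 2 means the VLF is active (readable by
-- fn_dblog); Status = 0 means inactive and available for reuse.
DBCC LOGINFO('Action');

-- Documented equivalent on SQL Server 2016 SP2+:
SELECT vlf_sequence_number, vlf_size_mb, vlf_active
FROM sys.dm_db_log_info(DB_ID('Action'));
```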

Is there any command I can run to check the number of transactions in the log file?

You already have the command; you just need to run it against every log backup ever taken across the entire time the database has been in existence.
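For the active portion of the log, one rough way to count transactions (again a sketch against the undocumented fn_dblog):

```sql
-- Every transaction writes exactly one LOP_BEGIN_XACT record, so
-- counting those records approximates the number of transactions in
-- the currently readable portion of the log.
SELECT COUNT(*) AS transaction_count
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_BEGIN_XACT';
```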

Also, this doesn't take into account actually reading the log structure, which isn't publicly documented. It doesn't even include the newer log structures for features like Hekaton (In-Memory OLTP), which have their own format inside the log structures, or any of the trace flags you'd need to set for certain behaviors.

All in all, you're better off doing something else with your time. If this is needed as part of database recovery, I'd employ some professionals. If this is purely academic, I'd suggest creating a very small database, putting it in the full recovery model, taking a full backup, and not taking a log backup until you're done looking at that part of the log.
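That experiment might look something like this (a sketch: the database name, table, and backup path are placeholders, not from the original answer):

```sql
-- Hypothetical sandbox: a tiny database in FULL recovery, so nothing
-- is truncated from the log until a log backup is taken.
CREATE DATABASE LogLab;
ALTER DATABASE LogLab SET RECOVERY FULL;
BACKUP DATABASE LogLab TO DISK = N'C:\Temp\LogLab.bak';  -- placeholder path

USE LogLab;
GO
CREATE TABLE dbo.t (id INT);
INSERT INTO dbo.t VALUES (1), (2), (3);

-- Everything since the full backup is still in the active log:
SELECT [Current LSN], Operation, [Transaction ID]
FROM fn_dblog(NULL, NULL);
```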