I currently have a database with an attached CMS. The CMS has 50+ people with various permission levels who can log in. I would like to know if there is a way I can look at the transaction log backups (taken every 30 minutes) and see where a particular value was updated on a bit switch. I can then cross-reference it with the person who was logged in at the time, so I am fine on that part. If someone can guide me on how to read the transaction log to find the specific update to table x, that would be a huge help.
Searching transaction logs for changes
security transaction-log
Related Solutions
The transaction log is an important mechanism for restoring your database to a specific point in time. If you have a large database (> 500 GB) and must restore it from a full backup alone, that will cost a lot of time. Also, if you take a full backup of your database every time, think about how long each of those backups could take.
A very easy concept for SQL Server can be: set the recovery model of your database to FULL.
Create a Maintenance Plan (1) in SQL Server:
- Do a full backup every week, e.g. to D:\yourbackup\FullDBBackup.bak
- Do a differential backup every two days to D:\yourbackup\DiffBackup.bak
- Back up your transaction log every 2 hours to D:\Yourbackup\Tranlogbackup.trn
Create a Maintenance Plan (2) in SQL Server:
- Delete all files older than 8 days from D:\yourBackup\*.bak
- Delete all files older than 3 days from D:\yourBackup\*.trn
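The maintenance-plan steps above map onto plain `BACKUP` commands. A minimal sketch, assuming a database named `YourDB` and the backup paths from the lists above:

```sql
-- Weekly full backup (database name YourDB is a placeholder)
BACKUP DATABASE [YourDB]
TO DISK = N'D:\yourbackup\FullDBBackup.bak'
WITH INIT, NAME = N'YourDB weekly full';

-- Differential backup every two days
BACKUP DATABASE [YourDB]
TO DISK = N'D:\yourbackup\DiffBackup.bak'
WITH DIFFERENTIAL, NAME = N'YourDB differential';

-- Transaction log backup every 2 hours (requires FULL recovery model)
BACKUP LOG [YourDB]
TO DISK = N'D:\Yourbackup\Tranlogbackup.trn'
WITH NAME = N'YourDB log backup';
```

In practice a maintenance plan usually writes each backup to a new, timestamped file rather than one fixed name, but the commands it generates are equivalent to these.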
In this case, you are able to recover your database to a specific point in time, very fast and very easily. SQL Server will automatically manage your backup files; older files will be deleted after your specified retention period.
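A point-in-time restore with this scheme looks roughly like the following; the database name, file paths, and the `STOPAT` timestamp are all placeholders:

```sql
-- Restore the most recent full backup, leaving the database in a restoring state
RESTORE DATABASE [YourDB]
FROM DISK = N'D:\yourbackup\FullDBBackup.bak'
WITH NORECOVERY;

-- Apply the latest differential backup, still without recovering
RESTORE DATABASE [YourDB]
FROM DISK = N'D:\yourbackup\DiffBackup.bak'
WITH NORECOVERY;

-- Roll the log forward to the desired moment, then bring the database online
RESTORE LOG [YourDB]
FROM DISK = N'D:\Yourbackup\Tranlogbackup.trn'
WITH STOPAT = N'2023-01-15 14:30:00', RECOVERY;
```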
I would suggest that you read about SQL Server Transaction Log here:
http://www.sqlservercentral.com/articles/Design+and+Theory/63350/
For using Maintenance Plans in SQL Server, just ask Bing / Google :D
You should build a small test DB and test this before you go to production.
First of all, I'd like to again point out that, in most cases, the SIMPLE recovery model is not recommended for a production database. However, if that is something you wish to change, the best way I've found to shrink the log file, after you put the database into the SIMPLE recovery model, is as follows. This will not only release free space from the log, but also shrink the file down to its smallest possible size.
As you stated, shrinking is also not normally a good practice, but there are times when it is warranted. There are many articles pro and con about it, and I agree with those who will tell you to never use DBCC SHRINKDATABASE; always use DBCC SHRINKFILE if you absolutely have to do a shrink.
However, the log file does indeed need a reasonable size, even in the SIMPLE recovery model, as that is where information is kept during transactions, and long-running SPIDs need more log space. Among many reasons, one is roll-back in the event of a problem.

As I stated above, these scripts will reduce your log file to its absolute smallest possible size, but because you need at least some log space, you should follow up by regrowing it manually, and also make sure your autogrow is set to a reasonable number; arguably, a percentage should always be avoided. Shrinking the log to its smallest size and then regrowing it manually accomplishes a very important goal: keeping your VLF fragmentation/counts down. That's another topic I won't go into here, and many sites, including Brent Ozar's, go into the actual statistics and testing showing why, but always grow your log file in 8 GB increments if it needs to be that large or larger.

You mentioned 25 MB above, and I can't imagine that being large enough. I personally manage over a hundred databases at the moment, and fewer than a dozen are set to 64 MB; all the others are much larger, ranging up to 64 GB. Your actual SQL activity and queries will determine what your optimal size should be. At minimum I'd go 512 MB to 2 GB, and set your growth to the same until you know where your log file should sit.
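Replacing a percentage autogrow with a fixed-size increment, as recommended above, can be done like this; the database and logical file names are placeholders:

```sql
-- Set the log file's autogrowth to a fixed 512 MB increment
-- instead of a percentage (names are placeholders)
ALTER DATABASE [DatabaseName]
MODIFY FILE ( NAME = N'LogicalName', FILEGROWTH = 512MB );
```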
I find running each line separately works best, ensuring success before going to the next step. The final line, of course, is what regrows your log to the size you set. I have it at 8 GB, but you can change that to fit your needs. If you do not want to fully shrink and then regrow at this time, remove the last line and change the "1" in the line above it to your chosen size.
One last thing before you go out and shrink your log file.
IMPORTANT: Any time a log file grows, manually or automatically by SQL Server, essentially your entire database is locked until the growth is done. Pick a good time when doing that won't impact active transactions, if you have after-hours or maintenance windows. A small size of 1 GB or less should grow fairly fast and be invisible to the end user, but that will truly be determined by your infrastructure.
USE [DatabaseName]
GO
-- Release free space from the end of the log without moving any data
DBCC SHRINKFILE (N'LogicalName', 0, TRUNCATEONLY)
GO
-- Shrink the log file to its smallest possible size (target 1 MB)
DBCC SHRINKFILE (N'LogicalName', 1)
GO
-- Regrow the log in one step to its working size (8 GB here)
ALTER DATABASE [DatabaseName] MODIFY FILE ( NAME = N'LogicalName', SIZE = 8192 MB )
GO
Best Answer
If you're using MS SQL Server, you're much better off downloading a third-party log-reader tool like this instead of trying to work through the entire transaction log with the built-in functions. It has a 14-day free trial and should make the log a lot more readable for you.
If you end up having to do it manually, check out this thread on DBCC LOG:
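If you do go the manual route, the undocumented `fn_dblog` function reads the active portion of the log. A rough sketch for finding modifications to a specific table; note that `fn_dblog` is unsupported, its output columns vary by version, and the table name `dbo.x` here is just a stand-in for your real "table x":

```sql
-- fn_dblog is undocumented and unsupported; column names and output
-- can change between SQL Server versions. NULL, NULL = whole available log.
SELECT [Current LSN],
       [Transaction ID],
       Operation,
       AllocUnitName
FROM   fn_dblog(NULL, NULL)
WHERE  Operation IN (N'LOP_MODIFY_ROW', N'LOP_MODIFY_COLUMNS')
  AND  AllocUnitName LIKE N'dbo.x%';   -- filter to table x's allocation units
```

Since your log backups are taken every 30 minutes, also be aware that `fn_dblog` only sees the live log; there is a similar undocumented `fn_dump_dblog` function that can read a log backup file, but it carries the same unsupported caveats.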