Seeing that the recovery model is set to simple and msdn states that the simple recovery model does not support point-in-time recovery - Does this mean that I won't be able to use my transaction log backups to restore the database in a disaster to an hour before it happened?
Even taking a transaction log backup is not supported for databases using the SIMPLE recovery model. This is a restriction of the database engine based on how this recovery model works and the recovery features it doesn't support, as you mentioned.
A transaction log backup maintenance plan task automatically skips databases in SIMPLE recovery to avoid causing errors.
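You can see the restriction directly by attempting a log backup against a SIMPLE-recovery database. This is a minimal sketch; the database name and backup path are hypothetical:

```sql
-- Hypothetical database name and path; assumes the database is in SIMPLE recovery.
BACKUP LOG MyDatabase
    TO DISK = N'C:\Backups\MyDatabase.trn';
-- Fails with:
-- Msg 4208, Level 16, State 1
-- The statement BACKUP LOG is not allowed while the recovery model is SIMPLE.
```

You can check the recovery model of every database with `SELECT name, recovery_model_desc FROM sys.databases;`.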
Which backup should be done first, the database backup or the transaction log backups? Articles I'm reading say I should do the database backup first and then the transaction log backup, else I will get maintenance plan errors, but I'm currently backing up my transaction logs first and then my databases, and I'm not getting any errors.
For the reasons I mentioned above, it won't matter for databases using SIMPLE recovery, as they will be skipped by the transaction log backup task.
For databases in the other two recovery models, a full backup must exist before you start taking transaction log backups (just the first time), or you will get an error -- this is probably what the articles refer to.
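The required ordering looks like this in T-SQL. The database name and file paths here are hypothetical placeholders:

```sql
-- Hypothetical names/paths. A full backup must exist once before log backups can start.
ALTER DATABASE MyDatabase SET RECOVERY FULL;

BACKUP DATABASE MyDatabase
    TO DISK = N'C:\Backups\MyDatabase_full.bak';

-- Valid only after the full backup above exists; running it first raises an error
-- because there is no base backup for the log chain.
BACKUP LOG MyDatabase
    TO DISK = N'C:\Backups\MyDatabase_log.trn';
```

After that first full backup, log backups can run on their own schedule regardless of when subsequent full backups are taken.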
Point-in-time recovery ability is normally driven by business need -- in other words, you determine how critical the data is and how much you can afford to lose, then set the appropriate recovery model to meet those needs, and finally create a backup solution.
Even though SIMPLE recovery does not support point-in-time recovery, if an hour of data loss is acceptable, a differential backup solution could work for you. (There are far too many variables involved in developing this kind of solution to give you a complete picture from what was provided in the question.)
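A differential-based schedule might be sketched like this; the names, paths, and frequencies are illustrative assumptions, not a recommendation for your environment:

```sql
-- Hypothetical schedule: one nightly full backup, plus hourly differentials.
BACKUP DATABASE MyDatabase
    TO DISK = N'C:\Backups\MyDatabase_full.bak';   -- e.g. nightly

BACKUP DATABASE MyDatabase
    TO DISK = N'C:\Backups\MyDatabase_diff.bak'
    WITH DIFFERENTIAL;                             -- e.g. hourly
```

Restoring the most recent full backup followed by the most recent differential would then limit loss to, at most, the differential interval.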
What you're seeing is blocking caused by the DBCC SHRINKDATABASE operation, and that is why your database appears not to respond. Shrink operations move data pages around on disk in order to consolidate free space within the data files. To move the appropriate data pages, the engine must take either page or table locks. While these should be short-lived, they can block queries.
Additionally, shrink operations are file manipulations (just like adding, modifying, or removing data files) and cannot run while a backup of any sort is executing. This is why you see your other error stating that file commands must be serialized.
To observe the blocking, you can run sp_who2 (or better yet, sp_whoisactive) while your DBCC SHRINKDATABASE is running. Identify the SPID the command is running under, and you should see it causing blocking (if there's enough activity).
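For example, in a separate session you could run the following. The DMV query is an alternative view of the same blocking; it assumes SQL Server 2005 or later:

```sql
-- Run in a separate session while the shrink executes.
EXEC sp_who2;   -- look at the BlkBy column for sessions blocked by the shrink's SPID

-- Alternative: list currently blocked requests via a DMV.
SELECT session_id, blocking_session_id, wait_type, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```

If the shrink's SPID appears in `blocking_session_id` (or in sp_who2's BlkBy column), it is the source of the blocking.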
Unfortunately, there's no way to avoid the blocking. This, along with other reasons, is why you should not perform any sort of automated shrink operation. The best plan for resolution is to remove any "Shrink Database" tasks from your maintenance plans.
Best practice is to avoid shrinking databases whenever possible, per Paul Randal's excellent article linked by JohnS. If you must shrink your files, use the following rules of thumb:
- Perform shrinks manually so you can monitor progress and manage them as necessary. If a shrink is interrupting user processes, you can kill it with minimal impact: interrupting a shrink will not roll back any of the page movement, so you retain any progress the shrink has already made.
- Keep your shrink increments to small values so as to minimize page movement and keep your maintenance/shrink period manageable.
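Both rules of thumb can be applied with DBCC SHRINKFILE, which targets a single file and accepts a target size. The file name and sizes below are hypothetical:

```sql
-- Hypothetical file name and sizes: shrink in small steps rather than one large jump.
-- Suppose the data file is currently 50,000 MB; step down about 1,000 MB at a time.
DBCC SHRINKFILE (N'MyDatabase_Data', 49000);  -- target size is specified in MB
-- Repeat with progressively smaller targets, checking for blocking between runs.
```

Because each step moves fewer pages, each run holds its locks for less time, and you can stop between steps without losing the space already reclaimed.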
Best Answer
The first thing you have to remember is that all transactions are logged to some degree. There are cases where an action can be minimally logged, but there is still some entry in the log.
The recovery models just change how often the space in the log is freed for reuse. In all cases the log space is freed once it is no longer needed. In the case of SIMPLE recovery, no backup is required for the space to be freed. However, as long as a transaction is active, that space won't be freed. In your particular case, you probably have some long-running transactions (probably batch jobs) that are holding the space open and causing additional data to be added to the end of the log, forcing it to grow.
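You can confirm this diagnosis from the server itself. A minimal sketch, with a hypothetical database name:

```sql
-- Shows why the log cannot currently be truncated;
-- ACTIVE_TRANSACTION points at a long-running transaction.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';   -- hypothetical name

-- Reports the oldest active transaction in the database, including its SPID.
DBCC OPENTRAN ('MyDatabase');
```

If `log_reuse_wait_desc` returns ACTIVE_TRANSACTION, DBCC OPENTRAN will tell you which session has held its transaction open the longest.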
If the logs are taking more space than you are comfortable with, your best bet is to examine any batch processes and try to shorten the transactions they use.