A developer said I should simply wrap all my database upgrade scripts in a transaction. If it fails, just roll back all changes. All my instincts tell me this is wrong, especially when it comes to dealing with large volumes of data and/or procedures and functions.
I typically hand-hold the upgrade process on single instance databases as follows:
- Negotiate a maintenance window
- Prepare upgrade scripts
- Put database into restricted user mode
- Disable scheduled jobs/processes that would normally hit the database during this window
- Perform a full backup
- Apply the upgrade scripts
- Have developer or test team confirm the app operates as expected
- Put database back into multi-user mode
- Release the database for normal use
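The restricted-user, backup, and multi-user steps above map directly onto a few T-SQL commands. A minimal sketch, assuming a database named `AppDB` and a local backup path (both hypothetical placeholders):

```sql
-- Keep ordinary users out for the maintenance window,
-- rolling back and killing existing connections immediately
ALTER DATABASE AppDB SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

-- Full backup before touching anything
BACKUP DATABASE AppDB
    TO DISK = N'D:\Backups\AppDB_PreUpgrade.bak'
    WITH INIT;

-- ... apply the upgrade scripts, have the dev/test team verify the app ...

-- Reopen the database for normal use
ALTER DATABASE AppDB SET MULTI_USER;
```

`RESTRICTED_USER` still admits members of `db_owner`, `dbcreator`, and `sysadmin`, which is what lets you run the upgrade scripts while everyone else is locked out.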
When it comes to rolling out changes to several hundred instances, however, I have changed my process as follows:
- I make the upgrade scripts much more robust: they can be run multiple times on the same server without harm, database version numbers are respected, scripts terminate if run against the wrong version, etc.
- Spawn a process for each server (using PowerShell, osql, etc.)
- Run the appropriate upgrade script
- Report success or failure
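The fan-out step above can be sketched as a small orchestrator that runs the upgrade against each server in parallel and collects a success/failure report. This is a hedged sketch, not the poster's actual tooling: the server names, script name, and the `sqlcmd` invocation are illustrative assumptions (the post mentions PowerShell/osql; `sqlcmd` is the modern equivalent of osql).

```python
"""Sketch: fan an upgrade script out to many SQL Server instances in parallel."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

SERVERS = ["sql01", "sql02", "sql03"]   # hypothetical instance names
UPGRADE_SCRIPT = "upgrade_v42.sql"      # hypothetical script file

def build_command(server: str) -> list[str]:
    # sqlcmd flags: -S target server, -i input script, -b abort batch on error
    return ["sqlcmd", "-S", server, "-i", UPGRADE_SCRIPT, "-b"]

def upgrade(server: str) -> tuple[str, bool]:
    """Run the upgrade script on one server; report (server, succeeded)."""
    result = subprocess.run(build_command(server), capture_output=True)
    return server, result.returncode == 0

def run_all(servers: list[str]) -> dict[str, bool]:
    # One worker per server, capped to avoid swamping the network
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(pool.map(upgrade, servers))

if __name__ == "__main__":
    # Dry run: just show the commands that would be spawned
    for server in SERVERS:
        print(" ".join(build_command(server)))
```

Because the scripts themselves are idempotent and version-guarded, a failed server can simply be re-run by calling `upgrade()` again for that one instance.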
Best Answer
There's no standard process because every system is different. Wrapping everything in a single transaction is about the last thing I would do. What happens if I need to move 500 GB of data around? That's one massive transaction.
Recently I've been using database snapshots as my rollback.
Basically: take a snapshot, make the changes, and delete the snapshot after signoff. If the upgrade fails, revert to the snapshot and try again.
It's a lot quicker to roll back a snapshot than to restore the database (assuming that the database is large).
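The snapshot workflow above comes down to three statements. A minimal sketch, assuming a database named `AppDB` whose logical data file is `AppDB_Data` (both hypothetical names you would substitute):

```sql
-- Before the upgrade: create a snapshot (a sparse file per data file)
CREATE DATABASE AppDB_PreUpgrade
ON (NAME = AppDB_Data, FILENAME = N'D:\Snapshots\AppDB_PreUpgrade.ss')
AS SNAPSHOT OF AppDB;

-- If the upgrade fails: revert the whole database to the snapshot
USE master;
RESTORE DATABASE AppDB FROM DATABASE_SNAPSHOT = 'AppDB_PreUpgrade';

-- After signoff: drop the snapshot to reclaim the sparse file
DROP DATABASE AppDB_PreUpgrade;
```

The revert is fast because the snapshot's sparse file only holds pages that changed since the snapshot was taken, so SQL Server copies back just those pages rather than restoring the full database.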