Everything is wrong with using backup files as deployment. But the burden is not on the DBA to provide the DDL; it is on development. Your design and development artifacts should have been database installation and upgrade scripts. Never change anything in the database manually; everything should be modified through your app. Rails gets this in spades with the whole migrations infrastructure, and you should try to adopt it too. I have long advocated similar techniques; see Version Control and your Database.
First let me make the case why source-code-based deployment/upgrade is superior to binary-based deployment (.bak files, or diff tools):
- source code can be checked into source control. This alone should settle the whole argument. Source control gives you history; a future you can look back, read the check-in notes, and understand the reasoning behind the current state.
- source code can be quickly inspected at a glance. You look at it and read it. Binary databases require the database to be attached, and reading even basic properties requires extensive knowledge of the metadata catalogs.
- source code is clean. You see
CREATE TABLE Foo (...)
, which clearly conveys the intent. A binary distribution, if you want to extract the object, showers you in a plethora of default properties. You lose the original intent.
- source code can be peer reviewed at check in.
- source code integrates into continuous deployment
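To make the "installation and upgrade scripts" idea concrete, here is a minimal sketch of a versioned upgrade script of the kind that belongs in source control. The SchemaVersion table name and the version-gating convention are illustrative assumptions, not a prescribed standard:

```sql
-- Hypothetical versioned upgrade script; table name SchemaVersion is illustrative.
-- Each change is gated on the recorded schema version, so the script is safe to re-run.
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = N'SchemaVersion')
    CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL);

DECLARE @v INT;
SELECT @v = ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion;

IF @v < 1
BEGIN
    -- v1: initial deployment of the Foo table
    CREATE TABLE dbo.Foo (FooId INT NOT NULL PRIMARY KEY, Name NVARCHAR(100) NOT NULL);
    INSERT INTO dbo.SchemaVersion (Version) VALUES (1);
END
```

Every subsequent release appends a new `IF @v < N` block, and the same script upgrades any environment from whatever version it is at.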
And I also have arguments why deploying by backup is bad (very bad):
- you have only postponed the issue. The very first update of the app will face the problem of deploying an update without losing the data. This can happen the day after deployment, when an issue is noticed in production, and you will be left facing the void: how do you modify the production DB to match the dev DB?
- Databases are not self-contained. Deployment involves objects outside the DB (logins, SQL Agent jobs, maintenance plans, etc.), none of which can be deployed with a backup.
- You never know what you deployed. Forgotten tables left over during dev? Test data? It is very difficult to clean up a database, but keeping your source code up to date and correct comes naturally.
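The "how do I match production to dev" problem above is exactly what an incremental change script solves. A sketch, assuming a dbo.Foo table and a hypothetical CreatedAt column being added; the existence check makes the script safe to run on both environments:

```sql
-- Hypothetical incremental migration: add a column only if it is missing,
-- so the same checked-in script runs cleanly on dev and on production.
IF NOT EXISTS (SELECT * FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Foo') AND name = N'CreatedAt')
BEGIN
    ALTER TABLE dbo.Foo ADD CreatedAt DATETIME2 NOT NULL
        CONSTRAINT DF_Foo_CreatedAt DEFAULT SYSUTCDATETIME();
END
```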
I would debug this using a server-side trace. Essentially, the reason Profiler is problematic in production is that it is a client application. If you attempt to trace too much and SQL Server has to manage the communication between the server and the client, it can bog things down. Instead, you can script out the trace as a T-SQL script and run it on the server itself. It will capture the information to a trace file, and you can then take that file elsewhere and review it.
To script the trace, open Profiler and select your events. Since you are concerned that statements may not be committed/completed, I would recommend
- RPC: Starting/Completed
- SP: Starting/Completed
- SQL: BatchStarting/BatchCompleted
Include the relevant columns and filter as necessary (for example, on database name, and maybe duration).
Then click 'Run', but immediately stop. At this point, go to your toolbar and select: File->Export->Script Trace Definition->For SQL Server... This will ask you to save a script somewhere. Once you've saved it, open it.
Before you run it, specify a file name and path (replace InsertFileNameHere). Then execute the script to start the trace. The trace will continue to run until you stop it. To stop it, make sure you know the trace ID, which is returned when you run the script; if you can't remember it, look at sys.traces. Once you have the trace ID, stop the trace by using sp_trace_setstatus to set the status to 0. Once it is set to 0, you can set it to 2 to delete it.
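The stop/delete sequence looks like this (the trace ID of 2 is just an example; check sys.traces if you did not note yours down):

```sql
-- Example trace ID; substitute the value returned when you started the trace.
DECLARE @TraceID INT = 2;

EXEC sp_trace_setstatus @TraceID, 0;  -- status 0: stop the trace
EXEC sp_trace_setstatus @TraceID, 2;  -- status 2: close and delete the trace definition
```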
After stopping it, you should have a trace file to review and identify what's going on with your processes. The trace file is viewable through the Profiler application.
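If you prefer T-SQL to the Profiler GUI, you can also load the trace file into a rowset with fn_trace_gettable. The file path here is illustrative:

```sql
-- Read the captured .trc file as a table; path is an assumption for illustration.
SELECT TextData, Duration, StartTime, EndTime, SPID
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
ORDER BY StartTime;
```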
If you are looking for missing indexes inside your plan cache, you can try this:
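One common approach, offered here as a sketch since the original query was not included: search the plan cache for plans whose showplan XML contains a MissingIndexes element.

```sql
-- Sketch: find cached plans that the optimizer flagged with missing-index hints.
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
    cp.usecounts,
    qp.query_plan,
    st.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist(N'//MissingIndexes') = 1
ORDER BY cp.usecounts DESC;
```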
You can find more info in the plan cache as well; I've listed a few examples here: http://sqlmag.com/database-performance-tuning/sql-server-plan-cache-junk-drawer-your-queries