I have to refactor and document a number of foo.sql queries which will be shared by a team of DB tech support (for customer configurations and things like that). There are types of tickets that come in regularly; each customer has their own servers and databases, but otherwise the schema is the same across the board.
Stored procedures are not an option at the current time. I'm debating whether to use dynamic SQL or SQLCMD scripting; I have not used much of either, as I am a bit new to SQL Server.
SQLCMD scripting definitely "looks" cleaner to me and is easier to read and make small changes to as needed, but it also forces the user to enable SQLCMD mode. Dynamic SQL is more difficult to work with, as syntax highlighting is lost because the query is built using string manipulation.
These are being edited and run using Management Studio 2012 against SQL Server 2008 R2. What are some of the pros/cons of either method, or some SQL Server "best practices" favoring one method over the other? Is one of them "safer" than the other?
Dynamic example:
declare @ServerName varchar(50) = 'REDACTED';
declare @DatabaseName varchar(50) = 'REDACTED';
declare @OrderIdsSeparatedByCommas varchar(max) = '597336, 595764, 594594';
declare @sql_OrderCheckQuery varchar(max) = ('
use {@DatabaseName};
select
-- stuff
from
{@ServerName}.{@DatabaseName}.[dbo].[client_orders]
as "Order"
inner join {@ServerName}.{@DatabaseName}.[dbo].[vendor_client_orders]
as "VendOrder" on "Order".o_id = "VendOrder".vco_oid
where "VendOrder".vco_oid in ({@OrderIdsSeparatedByCommas});
');
set @sql_OrderCheckQuery = replace( @sql_OrderCheckQuery, '{@ServerName}', quotename(@ServerName) );
set @sql_OrderCheckQuery = replace( @sql_OrderCheckQuery, '{@DatabaseName}', quotename(@DatabaseName) );
set @sql_OrderCheckQuery = replace( @sql_OrderCheckQuery, '{@OrderIdsSeparatedByCommas}', @OrderIdsSeparatedByCommas );
print (@sql_OrderCheckQuery); -- For debugging purposes.
execute (@sql_OrderCheckQuery);
SQLCMD example:
:setvar ServerName "[REDACTED]"
:setvar DatabaseName "[REDACTED]"
:setvar OrderIdsSeparatedByCommas "597336, 595764, 594594"
use $(DatabaseName)
select
--stuff
from
$(ServerName).$(DatabaseName).[dbo].[client_orders]
as "Order"
inner join $(ServerName).$(DatabaseName).[dbo].[vendor_client_orders]
as "VendOrder" on "Order".o_id = "VendOrder".vco_oid
where "VendOrder".vco_oid in ($(OrderIdsSeparatedByCommas));
Best Answer
Just to get these out of the way:
Technically speaking, both of these options are "dynamic" / ad hoc queries that are not parsed / validated until they are submitted. And both are susceptible to SQL injection since they are not parameterized (though with the SQLCMD scripts, if you are passing in a variable from a CMD script, then you do have an opportunity to replace ' with '', which may or may not work depending on where the variables are being used).

There are pros and cons to each approach:
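As a hedged illustration, that quote-doubling in T-SQL looks like the following; whether it actually protects you depends on where the value is spliced in, as noted above (the sample value here is made up):

```sql
-- Sketch: double any embedded single quotes in the user-supplied value
-- before concatenating it into a dynamic SQL string.
declare @OrderIdsSeparatedByCommas varchar(max) = '597336, 595764''; some injected text';
set @OrderIdsSeparatedByCommas = replace(@OrderIdsSeparatedByCommas, '''', '''''');
```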
If your support folks aren't doing ad hoc queries and are just filling out those variables, then they don't need to be in SSMS where they can edit those scripts and make unwanted changes.
I would create CMD scripts to prompt the user for the desired variable values and then call SQLCMD.EXE with those values. The CMD script could even log the execution to a file, complete with time stamp and variable values submitted.
Create one CMD script per SQL script and place in a networked shared folder. A user double-clicks on the CMD script and it just works.
Here is an example that prompts the user for the Order IDs and passes the CMD variable %OrderIDsSeparatedByCommas% in as the SQLCMD variable $(OrderIDsSeparatedByCommas).
Test SQL script (named: FixProblemX.sql):
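(The script body was not preserved here; a minimal sketch, reusing the tables and elided column list from the question's example:)

```sql
-- Sketch of FixProblemX.sql; table and column names reuse the question's example.
select
    -- stuff
from [dbo].[client_orders] as "Order"
inner join [dbo].[vendor_client_orders] as "VendOrder"
        on "Order".o_id = "VendOrder".vco_oid
where "VendOrder".vco_oid in ($(OrderIDsSeparatedByCommas));
```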
CMD script (named: FixProblemX.cmd):
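(The CMD script body was not preserved either; a minimal sketch of what such a wrapper might look like, assuming a trusted/Windows-auth connection. The server name, log path, and prompt wording are placeholders you would edit:)

```bat
@ECHO OFF
SETLOCAL

REM Placeholder; point this at your shared log folder.
SET ScriptLogPath=\\FileServer\Share\Logs

SET /P OrderIDsSeparatedByCommas=Enter Order IDs (comma-separated): 

REM Log the execution with a time stamp and the submitted values.
ECHO %DATE% %TIME%  OrderIDs=%OrderIDsSeparatedByCommas%>> "%ScriptLogPath%\FixProblemX.log"

REM -S server, -E trusted connection, -i input script, -v SQLCMD variable.
SQLCMD.EXE -S YourServerName -E -i FixProblemX.sql -v OrderIDsSeparatedByCommas="%OrderIDsSeparatedByCommas%"

PAUSE
```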
Be sure to edit the ScriptLogPath variable towards the top of the script. Also, the SQL scripts (specified by the -i command-line switch for SQLCMD.EXE) might benefit from having a fully-qualified path, though I'm not entirely sure.