Dynamic SQL vs. SQLCMD – User-Shared Queries Comparison

dynamic-sql, scripting, sql server, sqlcmd

I have to refactor and document a number of foo.sql queries which will be shared by a team of DB tech support (for customer configurations and things like that). There are types of tickets that come in regularly where each customer has their own servers and databases, but otherwise the schema is the same across the board.

Stored procedures are not an option at the current time. I'm debating whether to use dynamic SQL or SQLCMD; I have not used much of either, as I am a bit new to SQL Server.

SQLCMD scripting definitely "looks" cleaner to me, and it is easier to read and to make small changes to the queries as needed, but it also forces the user to enable SQLCMD mode. Dynamic SQL is more difficult because syntax highlighting is lost when the query is built through string manipulation.

These are being edited and run using Management Studio 2012 against SQL Server 2008 R2. What are some of the pros/cons of either method, or some of the SQL Server "best practices" for one method or the other? Is one of them "safer" than the other?

Dynamic example:

declare @ServerName varchar(50) = 'REDACTED';
declare @DatabaseName varchar(50) = 'REDACTED';
declare @OrderIdsSeparatedByCommas varchar(max) = '597336, 595764, 594594';

declare @sql_OrderCheckQuery varchar(max) = ('
use {@DatabaseName};
select 
    -- stuff
from 
    {@ServerName}.{@DatabaseName}.[dbo].[client_orders]
        as "Order"
    inner join {@ServerName}.{@DatabaseName}.[dbo].[vendor_client_orders]
        as "VendOrder" on "Order".o_id = "VendOrder".vco_oid
where "VendOrder".vco_oid in ({@OrderIdsSeparatedByCommas});
');
set @sql_OrderCheckQuery = replace( @sql_OrderCheckQuery, '{@ServerName}',   quotename(@ServerName)   );
set @sql_OrderCheckQuery = replace( @sql_OrderCheckQuery, '{@DatabaseName}', quotename(@DatabaseName) );
set @sql_OrderCheckQuery = replace( @sql_OrderCheckQuery, '{@OrderIdsSeparatedByCommas}', @OrderIdsSeparatedByCommas );
print   (@sql_OrderCheckQuery); -- For debugging purposes.
execute (@sql_OrderCheckQuery);

SQLCMD example:

:setvar ServerName "[REDACTED]"
:setvar DatabaseName "[REDACTED]"
:setvar OrderIdsSeparatedByCommas "597336, 595764, 594594"

use $(DatabaseName)
select 
    --stuff
from 
    $(ServerName).$(DatabaseName).[dbo].[client_orders]
        as "Order"
    inner join $(ServerName).$(DatabaseName).[dbo].[vendor_client_orders]
        as "VendOrder" on "Order".o_id = "VendOrder".vco_oid
where "VendOrder".vco_oid in ($(OrderIdsSeparatedByCommas));

Best Answer

Just to get these out of the way:

  • Technically speaking, both of these options are "dynamic" / ad hoc queries that are not parsed / validated until they are submitted. And both are susceptible to SQL injection since they are not parameterized (though with the SQLCMD scripts, if you are passing in a variable from a CMD script, you do have an opportunity to replace ' with '', which may or may not work depending on where the variables are being used); see the sketch after this list.

  • There are pros and cons to each approach:

    • SQL scripts in SSMS can be easily edited (which is great if that is a requirement), and working with results is easier than with output from SQLCMD. On the down-side, the user is in an IDE, so it is easy to mess up the SQL, and the IDE makes it easy to make a great variety of changes without knowing the SQL to do it.
    • Running scripts via SQLCMD.EXE does not allow the user to easily make changes (without editing the script in an editor and saving it first). This is great if the users aren't supposed to be changing the scripts, and this method also allows for logging each execution. On the down-side, if there is a need to routinely edit the scripts, it would be quite cumbersome. And if users need to scan through 100k rows of a result set and/or copy those results to Excel or something, that is also difficult with this approach.
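
To make that first bullet concrete, here is a minimal sketch (my illustration, borrowing the table and column names from the question; it is not code from either script above) of how the dynamic-SQL approach could at least pass plain values as real parameters via sp_executesql, while identifiers are still spliced in through QUOTENAME:

DECLARE @DatabaseName sysname = N'REDACTED';
DECLARE @OrderId int = 597336;

-- Identifiers cannot be parameterized, so they are bracketed and concatenated;
-- the value travels as a true parameter and cannot inject additional SQL.
DECLARE @sql nvarchar(max) = N'
SELECT *
FROM ' + QUOTENAME(@DatabaseName) + N'.[dbo].[client_orders] AS [Order]
WHERE [Order].[o_id] = @OrderId;';

EXEC sys.sp_executesql @sql, N'@OrderId int', @OrderId = @OrderId;

The comma-separated list of IDs is the awkward part: SQL Server 2008 R2 has no built-in STRING_SPLIT, so the scripts above splice the list in as text, which is exactly where the injection exposure lives.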

If your support folks aren't doing ad hoc queries and are just filling out those variables, then they don't need to be in SSMS where they can edit those scripts and make unwanted changes.

I would create CMD scripts to prompt the user for the desired variable values and then call SQLCMD.EXE with those values. The CMD script could even log the execution to a file, complete with time stamp and variable values submitted.

Create one CMD script per SQL script and place them in a shared network folder. A user double-clicks the CMD script and it just works.

Here is an example that:

  • prompts the user for the server name (no error checking on that yet)
  • prompts the user for the database name
    • if left blank, it will list the databases on the specified server and prompt again
    • if the database name is invalid, the user will be prompted again
  • prompts the user for the OrderIdsSeparatedByCommas
    • if blank, prompts the user again
  • runs the SQL script, passing in the value of %OrderIdsSeparatedByCommas% as the SQLCMD variable $(OrderIdsSeparatedByCommas)
  • logs the execution date, time, ServerName, DatabaseName, and OrderIdsSeparatedByCommas to a log file named for the Windows login running the script (this way, if the log directory is on the network and multiple people are using this, there won't be any write contention on the log file like there might be if the USERNAME were logged per entry in a shared file)
    • if the log file directory does not exist, it will be created

Test SQL script (named: FixProblemX.sql):

SELECT  *
FROM    sys.objects
WHERE   [schema_id] IN ($(OrderIdsSeparatedByCommas));

CMD script (named: FixProblemX.cmd):

@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION

SET ScriptLogPath=\\server\share\RunSqlCmdScripts\LogFiles

CLS

SET /P ScriptServerName=Please enter in a Server Name (leave blank to exit): 

IF "%ScriptServerName%" == "" GOTO :ThisIsTheEnd

REM echo %ScriptServerName%

:RequestDatabaseName
ECHO.
SET /P ScriptDatabaseName=Please enter in a Database Name (leave blank to list DBs on %ScriptServerName%): 

IF "%ScriptDatabaseName%" == "" GOTO :GetDatabaseNames

SQLCMD -b -E -W -h-1 -r0 -S %ScriptServerName% -Q "SET NOCOUNT ON; IF (NOT EXISTS(SELECT [name] FROM sys.databases WHERE [name] = N'%ScriptDatabaseName%')) RAISERROR('Invalid DB name!', 16, 1);" 2> nul

IF !ERRORLEVEL! GTR 0 (
    ECHO.
    ECHO That Database Name is invalid. Please try again.

    SET ScriptDatabaseName=
    GOTO :RequestDatabaseName
)

:RequestOrderIDs
ECHO.
SET /P OrderIdsSeparatedByCommas=Please enter in the OrderIDs (separate multiple IDs with commas): 

IF "%OrderIdsSeparatedByCommas%" == "" (

    ECHO.
    ECHO Don't play me like that. You gots ta enter in at least ONE lousy OrderID, right??
    GOTO :RequestOrderIDs
)


REM Finally run SQLCMD!!
SQLCMD -E -W -S %ScriptServerName% -d %ScriptDatabaseName% -i FixProblemX.sql -v OrderIdsSeparatedByCommas="%OrderIdsSeparatedByCommas%"

REM Log this execution
SET ScriptLogFile=%ScriptLogPath%\%~n0_%USERNAME%.log
REM echo %ScriptLogFile%

IF NOT EXIST "%ScriptLogPath%" MKDIR "%ScriptLogPath%"

ECHO %DATE% %TIME% ServerName=%ScriptServerName%    DatabaseName=[%ScriptDatabaseName%] OrderIdsSeparatedByCommas=%OrderIdsSeparatedByCommas%   >> "%ScriptLogFile%"

GOTO :ThisIsTheEnd

:GetDatabaseNames
ECHO.
SQLCMD -E -W -h-1 -S %ScriptServerName% -Q "SET NOCOUNT ON; SELECT [name] FROM sys.databases ORDER BY [name];"
ECHO.
GOTO :RequestDatabaseName

:ThisIsTheEnd
PAUSE

Be sure to edit the ScriptLogPath variable towards the top of the script.

Also, the SQL scripts (specified by the -i command-line switch for SQLCMD.EXE) might benefit from having a fully-qualified path, but I'm not entirely sure that it is needed; one way of building such a path is sketched below.
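
If you do want to fully qualify it, a minimal sketch (an assumption on my part, not something verified in the original answer) is to build the path from %~dp0, which expands to the drive and folder of the running CMD script:

REM %~dp0 already ends with a backslash, so the .sql file is resolved relative
REM to the CMD script's own folder rather than the caller's current directory.
SET ScriptSqlFile=%~dp0FixProblemX.sql
SQLCMD -E -W -S %ScriptServerName% -d %ScriptDatabaseName% -i "%ScriptSqlFile%" -v OrderIdsSeparatedByCommas="%OrderIdsSeparatedByCommas%"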