At least in this example, there's an easier approach. Remember that the optimizer always tries to plan query execution so that the path requiring the least work to retrieve a valid result set is the path chosen.
SELECT * FROM t1 WHERE (col1 = 'foo') OR (1 = 1);
The server will always return all of the rows, because 1 = 1 contains all the truthiness required to conclude "yes, this row matches the WHERE clause" ... and it will not bother scanning the values in col1 to see if they contain 'foo', because the optimizer has found a more obvious, simpler, "lower cost" approach: (any expression) OR (TRUE) is always true, and the shortest path to truth is the one the optimizer is supposed to find and take. It doesn't need to evaluate 1 = 1 row by row, since that's a constant expression.
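The effect is easy to demonstrate in any engine; here is a minimal sketch using SQLite via Python's sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

# In-memory database with a tiny sample table (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (col1 TEXT)")
conn.executemany("INSERT INTO t1 (col1) VALUES (?)",
                 [("foo",), ("bar",), ("baz",)])

# (col1 = 'foo') OR (1 = 1) is constant-true, so every row comes back;
# the engine never needs to inspect col1 at all.
rows = conn.execute(
    "SELECT * FROM t1 WHERE (col1 = 'foo') OR (1 = 1)").fetchall()
print(len(rows))  # all 3 rows, not just the one matching 'foo'
```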
You don't even need a prepared statement, just a query in the procedure:
SELECT id, bar
FROM my_table
WHERE baz = mandatory
AND (optional IS NULL OR field = optional)
AND (optional2 IS NULL OR field2 = optional2);
If the program variable "optional" is null, the optimizer realizes that the entire parenthesized expression on that line will be true for every row, no matter what: in the context of the running query, "optional" is a constant value, so optional IS NULL is always going to be true, just like 1 = 1 is always true. The optimizer recognizes that this, too, is a constant expression. The contents of "field" never need to be evaluated, because the OR expression has already been determined to be true. The same goes for "optional2".
On the flip side, if "optional" is not null, then optional IS NULL gets optimized away, since it's impossible for that expression to be true, and you are left with AND (field = optional). The same, again, for "optional2".
The optimizer is left, in each case, with a query that can still use indexes as appropriate.
Since the query is in the procedure body, not a dynamically assembled string, SQL injection is impossible: the server never has a chance to blur the distinction between data held in variables and the literal query text.
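The same pattern works with bound parameters from application code. Here is a minimal runnable sketch using SQLite via Python's sqlite3 (the table, columns, and data are invented for illustration):

```python
import sqlite3

# Hypothetical sample table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, bar TEXT, baz TEXT, "
             "field TEXT, field2 TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?, ?, ?)",
    [(1, "a", "m", "x", "p"),
     (2, "b", "m", "y", "q"),
     (3, "c", "n", "x", "p")])

SQL = """SELECT id, bar FROM items
         WHERE baz = ?
           AND (? IS NULL OR field = ?)
           AND (? IS NULL OR field2 = ?)
         ORDER BY id"""

def find(baz, optional=None, optional2=None):
    # A NULL parameter makes its whole OR-term constant-true, so the
    # corresponding filter simply drops out of the plan.
    return conn.execute(
        SQL, (baz, optional, optional, optional2, optional2)).fetchall()

print(find("m"))                # both rows with baz = 'm'
print(find("m", optional="x"))  # only the baz = 'm' row where field = 'x'
```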
The next-to-last major step was discovering that it is, unfortunately for MySQL users, only MySQL that joins slowly, so these write-intensive tables were all given unique integer identifiers to join on. That accounted for one third of the remaining reduction in average time consumed.
The last step was taking CTE optimization fencing into account, which delivered the remaining two thirds of the reduction.
For the above set of queries, the average time consumed is now between 1 and 2 ms on an i7 laptop tuned for SSDs, even though I'm using the included hard drive. On a server with a single SSD, the time consumed averages under 1 ms.
To get the last performance boost, almost all queries were condensed: if a table needed to be updated, for example, it was best to do it in one query, no matter how strange or performance-reducing that single query might appear.
Reading to assemble relevant primary-key values required more testing. If the values came from a combination of RETURNING clauses, it was best to let each sub-statement recalculate them rather than depend on another sub-statement to do the calculation. If the read had to go to disk, even a small one, it was best to do it once and reference the result across the multiple writing queries.
Copying was another grey area. If a table required inserts, updates, and deletes based upon data from another table, it was slower to pre-select the data. It was faster to simply reference the primary key data.
In general, aside from reads that go to disk to assemble primary-key values, it is best to compress a CTE as much as possible: keep the chains as short and narrow as you can, even if the result looks strange.
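As a hedged sketch of what condensing several writes into one statement can look like, here is a hypothetical data-modifying CTE in PostgreSQL syntax (all table and column names are invented; note that since PostgreSQL 12 a CTE like this is no longer an optimization fence unless marked MATERIALIZED):

```sql
-- Hypothetical: move rows flagged 'done' into an archive table and
-- bump a counter, in a single statement instead of three.
WITH moved AS (
    DELETE FROM tasks
     WHERE status = 'done'
 RETURNING id, payload
), archived AS (
    INSERT INTO tasks_archive (id, payload)
    SELECT id, payload FROM moved
)
UPDATE stats
   SET archived_count = archived_count + (SELECT count(*) FROM moved);
```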
Multi-prepared-statement transactions that previously consumed tens of milliseconds in my application now take at most 5 ms.
This approach should probably be limited to small amounts of redundant data.
Best Answer
Prepared statements are supported in SQL Server, but you are probably having a hard time finding examples of using them because they aren't really necessary.
You could call sp_prepare to get a handle and then pass that handle to sp_execute. Or you can call either sp_prepexec or sp_prepexecrpc, both of which combine the "prepare" and "exec" steps. In all cases, you would finish by (eventually) calling sp_unprepare, passing in the handle returned by sp_prepare, sp_prepexec, or sp_prepexecrpc.

However, as stated previously, this pattern is obsolete / unnecessary in Microsoft SQL Server, because queries and execution plans are cached automatically. Queries are cached, and looked up, based on a hash of the query text. Ad hoc queries are cached based on the exact, full text of the query (a binary comparison), including any values concatenated into the query for filtering, etc. Those plans are unlikely to be re-used unless the exact same query (white-space and all) is executed again. Parameterized queries, by contrast, are cached based on the query text without the parameter values, and so can be re-used across different parameter values -- just like when doing a "prepare". Hence, there is no need to call prepare / exec / unprepare.
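As a minimal sketch of the parameterized form that SQL Server caches and re-uses naturally, without any prepare / exec / unprepare handles (the table and parameter names here are invented for illustration):

```sql
-- Hypothetical example: the plan is cached on the parameterized text,
-- so repeated calls with different @CustomerID values re-use it.
EXEC sp_executesql
    N'SELECT OrderID, Total FROM dbo.Orders WHERE CustomerID = @CustomerID',
    N'@CustomerID INT',
    @CustomerID = 42;
```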
If you execute queries using stored procedures, sp_executesql (for Dynamic SQL), or a properly parameterized query from app code, then SQL Server already does a lookup to see whether that parameterized query has been parsed and compiled (i.e. "prepared"), and if so, it uses the cached info.

I believe the most widespread use of this pattern in SQL Server is by ODBC drivers. When writing an application that connects natively to SQL Server, one would not employ this pattern. In fact, I would recommend against using it (whenever possible), as it has the downside of taking up more memory per session: the query definitions are stored so that the execution plan can be regenerated if the plan is, for whatever reason, removed from the plan cache. For more details on this behavior, including sample .NET / C# code, please see my other answer here on DBA.StackExchange:
What is the sense and benefit of using SqlCommand.Prepare()?