The line from MSDN is talking about using EXEC(), like this:
DECLARE @x varchar(32) = 'abc', @sql nvarchar(max);  -- 'abc' is just a placeholder value
SET @sql = 'SELECT foo FROM dbo.bar WHERE x = ''' + @x + ''';';
EXEC(@sql);
In my testing, modern versions of SQL Server are still able to reuse a plan like this, but other variables may come into play (the version, for example, or whether you add conditional WHERE clauses based on the presence of certain parameters - in which case each combination will generate a different plan).
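To illustrate that last point (the y column, the values, and the whole dbo.bar table are placeholders, not from the question): which predicates you append - and, with literal concatenation, even which values you pass - changes the query text, and different text means a different plan.

DECLARE @x varchar(32) = 'abc', @y int = NULL, @sql nvarchar(max);
SET @sql = N'SELECT foo FROM dbo.bar WHERE 1 = 1';
IF @x IS NOT NULL
    SET @sql = @sql + N' AND x = ''' + @x + N'''';                -- one shape of the query text
IF @y IS NOT NULL
    SET @sql = @sql + N' AND y = ' + CONVERT(nvarchar(11), @y);   -- another shape
EXEC(@sql);   -- each distinct final string gets its own cached plan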
If you use sp_executesql, the parameter values can still cause parameter sniffing issues (just as with normal parameterized SQL), but this has nothing to do with whether SQL Server can reuse the plan. The plan will get used over and over again, just as if you hadn't used sp_executesql at all, unless something changes that would cause a directly executed query to get recompiled, in which case this one will get recompiled too (essentially, SQL Server doesn't store anything with the plan that says "this one was executed from sp_executesql and that one wasn't"):
DECLARE @x varchar(32) = 'abc', @sql nvarchar(max);  -- again, 'abc' is just a placeholder
SET @sql = N'SELECT foo FROM dbo.bar WHERE x = @x;';
EXEC sys.sp_executesql @sql, N'@x varchar(32)', @x;
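If you want to convince yourself the plan really is being reused, one rough way (my own habit, not anything from the MSDN passage) is to run the statement a few times with different @x values and then look for the single "Prepared" entry and its usecounts in the plan cache:

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%SELECT foo FROM dbo.bar%'
  AND st.text NOT LIKE N'%dm_exec_cached_plans%';   -- exclude this query itself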
As a bonus, this gives you built-in protection against SQL injection, and saves you from worrying about doubling up single quotes inside the string delimiters. I blogged about some of this here, and please read up on SQL injection here and here.
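To show the difference the quoting point makes, here is a quick side-by-side sketch (same hypothetical dbo.bar as above, with a value that contains a quote):

DECLARE @x varchar(32) = 'O''Brien', @sql nvarchar(max);

-- EXEC() route: you have to double up the embedded quotes yourself
SET @sql = N'SELECT foo FROM dbo.bar WHERE x = ''' + REPLACE(@x, '''', '''''') + N''';';
EXEC(@sql);

-- sp_executesql route: the value travels as a parameter, so no escaping at all
SET @sql = N'SELECT foo FROM dbo.bar WHERE x = @x;';
EXEC sys.sp_executesql @sql, N'@x varchar(32)', @x;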
If you are having issues with plan re-use and/or parameter sniffing, some things you should look into are OPTION (RECOMPILE), OPTIMIZE FOR, optimize for ad hoc workloads, and simple/forced parameterization.
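Very roughly, and only as a sketch (the table, variable, and values are placeholders, and the server- and database-level settings at the end need the right permissions and some thought before you flip them), those options look like this:

DECLARE @x varchar(32) = 'abc';

SELECT foo FROM dbo.bar WHERE x = @x OPTION (RECOMPILE);                      -- fresh plan on every execution
SELECT foo FROM dbo.bar WHERE x = @x OPTION (OPTIMIZE FOR (@x = 'typical'));  -- plan built for a representative value

EXEC sys.sp_configure N'show advanced options', 1; RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1; RECONFIGURE;       -- server-wide: stub out single-use plans

ALTER DATABASE CURRENT SET PARAMETERIZATION FORCED;                            -- or SIMPLE, the default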
I addressed a few similar questions in response to a recent webcast here; it may be worth a skim.
The gist is: don't be afraid to use sp_executesql, but only use it when you need it, and only spend energy over-optimizing it when you have an actual performance issue. The example above is a terrible one, because there's no reason to use dynamic SQL here - I've written this answer assuming you have a legitimate use case.
I love your careful approach to query tuning and to reviewing options and plans. I wish more developers did this. One caution: always test with a lot of rows and look at the logical reads - this is a smallish table. Try to generate a sample load and run the query again. One small issue: your top query doesn't ask for an ORDER BY while your bottom query does, so you should compare and contrast them each with ordering.
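In case it's useful, the reads and timings I'm referring to come from simply wrapping each run in the statistics output:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... run each query variant here, with and without the ORDER BY ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;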
I just quickly created a SalesOrders table with 200,000 sales orders in it - still not huge by any stretch of the imagination. And ran the queries with the ORDER BY in each. I also played with indexes a bit.
With no clustered index on OrderID and just a non-clustered index on CustID, the second query outperformed, especially with the ORDER BY included in each. There were twice as many reads in the first query as in the second, and the cost percentages were 67% / 33% between the queries.
With a clustered index on OrderID and a non-clustered index just on CustID, they performed at a similar speed and with the exact same number of reads.
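If you want to reproduce something similar, here's a rough sketch of the kind of test rig I mean - the schema, names, and data distribution are my own stand-ins, not your actual table:

CREATE TABLE dbo.SalesOrders
(
    OrderID   int      NOT NULL,
    CustID    int      NOT NULL,
    OrderDate datetime NOT NULL
);

-- roughly 200,000 orders spread over 10,000 customers
INSERT dbo.SalesOrders (OrderID, CustID, OrderDate)
SELECT TOP (200000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       ABS(CHECKSUM(NEWID())) % 10000 + 1,
       DATEADD(DAY, -(ABS(CHECKSUM(NEWID())) % 1000), GETDATE())
FROM sys.all_objects AS o1
CROSS JOIN sys.all_objects AS o2;

-- scenario 1: only a non-clustered index on CustID
CREATE NONCLUSTERED INDEX IX_SalesOrders_CustID ON dbo.SalesOrders (CustID);

-- scenario 2: add a clustered index (a clustered primary key here) on OrderID as well
ALTER TABLE dbo.SalesOrders ADD CONSTRAINT PK_SalesOrders PRIMARY KEY CLUSTERED (OrderID);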
So I would suggest you increase the number of rows and do some more testing. But my final analysis on your queries -
You may find them behaving more similarly than you realize when you increase the rows, so keep that caveat in mind and test that way.
If all you ever want to return is the maximum OrderID for each customer, and you want to determine that by the OrderID being the greatest, then of these two the second query is the better way to go, to my mind - it is a bit simpler, and while ever so slightly more expensive based on subtree cost, it is a quicker and easier statement to decipher. If you intend to add other columns into your result set someday, the first query allows you to do that.
Updated:
One of your comments under your question was:
Please keep in mind that finding the best query in this question is a means of refining the techniques used for comparing them.
But the best takeaway for doing that: test with more data - always make sure you have data consistent with production and expected future production. Query plans start looking different when you give the tables more rows, so try to keep the distribution what you'd expect in production. And pay attention to things like including the ORDER BY or not; here I don't think it makes a terrible amount of difference in the end, but it's still worth digging into.
Your approach of comparing this level of detail and data is a good one. Subtree costs are mostly arbitrary and close to meaningless, but still worth at least looking at for comparison between edits/changes or even between queries. Looking at the time statistics and the IO is quite important, as is looking at the plan for anything that feels out of place for the size of the data you are working with and what you are trying to do.
Best Answer
I don't know of a way to do this in the plan from Management Studio, but this is one of the many things the free SentryOne Plan Explorer will do for you when you generate an actual plan from within the tool - it includes all the runtime metrics per statement.
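If you want to stay inside Management Studio, a rough fallback (my own workaround, and only aggregated numbers since the plan was cached, not the per-run detail Plan Explorer captures) is to pull statement-level metrics from the plan cache DMVs:

SELECT TOP (20)
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_worker_time,
       qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;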