Parameter sniffing is your friend almost all of the time and you should write your queries so that it can be used. Parameter sniffing helps building the plan for you using the parameter values available when the query is compiled. The dark side of parameter sniffing is when the values used when compiling the query is not optimal for the queries to come.
The query in a stored procedure is compiled when the stored procedure is executed, not when the individual query is executed. So in

CREATE PROCEDURE WeeklyProc(@endDate DATE)
AS
BEGIN
    DECLARE @startDate DATE = DATEADD(DAY, -6, @endDate);
    SELECT
        -- Stuff
    FROM Sale
    WHERE SaleDate BETWEEN @startDate AND @endDate;
END;

the values SQL Server has to deal with are a known value for @endDate and an unknown value for @startDate. That leaves SQL Server guessing at 30% of the rows returned for the filter on @startDate, combined with whatever the statistics tell it about @endDate. On a big table with a lot of rows, that could give you a scan operation where you would benefit most from a seek.
Your wrapper procedure solution makes sure that SQL Server sees the values when DateRangeProc is compiled, so it can use known values for both @startDate and @endDate.
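Sketched out, the wrapper pattern looks something like this (DateRangeProc's body is assumed from the example above; the question's actual procedures may differ in detail):

```sql
-- The inner procedure does the actual work. When WeeklyProc calls it,
-- both parameters carry concrete values, so the plan for this query is
-- compiled with known values for @startDate and @endDate.
CREATE PROCEDURE DateRangeProc(@startDate DATE, @endDate DATE)
AS
BEGIN
    SELECT
        -- Stuff
    FROM Sale
    WHERE SaleDate BETWEEN @startDate AND @endDate;
END;
GO

-- The wrapper only computes the start date and delegates.
CREATE PROCEDURE WeeklyProc(@endDate DATE)
AS
BEGIN
    DECLARE @startDate DATE = DATEADD(DAY, -6, @endDate);
    EXEC DateRangeProc @startDate, @endDate;
END;
```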
Both your dynamic queries lead to the same thing: the values are known at compile time.
The one with a default null value is a bit special. The values known to SQL Server at compile time are a known value for @endDate and null for @startDate. Using a null in a BETWEEN will give you 0 rows, but SQL Server always guesses at 1 row in those cases. That might be a good thing here, but if you call the stored procedure with a large date interval, where a scan would have been the best choice, it may end up doing a bunch of seeks.
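For clarity, a sketch of the default-null variant being discussed, assuming it has roughly this shape (the question's actual procedure may differ):

```sql
-- @startDate defaults to NULL, so when the procedure is compiled the
-- plan is built with NULL for @startDate and the sniffed value for
-- @endDate. The assignment below only changes the runtime value, not
-- the value used for the estimate.
CREATE PROCEDURE WeeklyProc(@endDate DATE, @startDate DATE = NULL)
AS
BEGIN
    SET @startDate = DATEADD(DAY, -6, @endDate);
    SELECT
        -- Stuff
    FROM Sale
    WHERE SaleDate BETWEEN @startDate AND @endDate;
END;
```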
I left "Use the DATEADD() function directly" until the end of this answer because it is the one I would use, and there is something strange about it as well.
First off, SQL Server does not call the function multiple times when it is used in the WHERE clause; DATEADD is considered a runtime constant.
And I would have thought that DATEADD is evaluated when the query is compiled, so that you would get a good estimate of the number of rows returned. But that is not the case here. SQL Server estimates based on the value in the parameter regardless of what you do with DATEADD (tested on SQL Server 2012), so in your case the estimate will be the number of rows registered for @endDate. Why it does that I don't know, but it has to do with the use of the data type DATE. Shift to DATETIME in both the stored procedure and the table and the estimate will be accurate, meaning that DATEADD is considered at compile time for DATETIME but not for DATE.
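The variant in question inlines the computation in the WHERE clause instead of assigning it to a local variable, along these lines:

```sql
-- DATEADD is used directly in the WHERE clause. It is evaluated once
-- per execution (runtime constant), but with the DATE data type the
-- estimate is still based on @endDate alone, as described above.
CREATE PROCEDURE WeeklyProc(@endDate DATE)
AS
BEGIN
    SELECT
        -- Stuff
    FROM Sale
    WHERE SaleDate BETWEEN DATEADD(DAY, -6, @endDate) AND @endDate;
END;
```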
So to summarize this rather lengthy answer, I would recommend the wrapper procedure solution. It will always allow SQL Server to use the values provided when compiling the query, without the hassle of using dynamic SQL.
PS:
In comments you got two suggestions.
OPTION (OPTIMIZE FOR UNKNOWN) will give you an estimate of 9% of rows returned, and OPTION (RECOMPILE) will make SQL Server see the parameter values, since the query is recompiled every time.
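For reference, a minimal sketch of how those two hints would be applied to the query in the procedure above (table and column names taken from that example):

```sql
-- OPTION (RECOMPILE): the statement is recompiled on every execution,
-- so SQL Server sees the actual values of both variables each time,
-- at the cost of a compilation per call.
SELECT
    -- Stuff
FROM Sale
WHERE SaleDate BETWEEN @startDate AND @endDate
OPTION (RECOMPILE);

-- OPTION (OPTIMIZE FOR UNKNOWN): compiles once, ignoring the sniffed
-- values and using the fixed density-based guess instead.
SELECT
    -- Stuff
FROM Sale
WHERE SaleDate BETWEEN @startDate AND @endDate
OPTION (OPTIMIZE FOR UNKNOWN);
```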
This has been reported no less than four times (but all traces have been removed from the Wayback Machine since Connect was murdered). This one was closed as fixed:
- connect.microsoft.com/SQLServer/feedback/details/365876/
But that wasn't true. (Also look at the workarounds section - the workaround I suggested is not always going to be acceptable.)
This one was closed as by design / won't fix:
- connect.microsoft.com/SQLServer/feedback/details/581193/
These two are newer and still active:
- connect.microsoft.com/SQLServer/feedback/details/800919/ (now closed as Won't Fix)
- connect.microsoft.com/SQLServer/feedback/details/804365/ (now closed as By Design)
Until Microsoft can be convinced otherwise, you're going to have to find a workaround - just have all the types deployed before running your test, or break it up into multiple tests.
I will try to get confirmation from my contacts about what Umachandar meant by fixed in the earliest item, because obviously that conflicts with later statements.
UPDATE #1 (of, hopefully, exactly 2)
The original bug (that was closed as fixed) involved alias types, but not of type TABLE
. It was reported against SQL Server 2005, which obviously didn't have table types and TVPs. It seems UC reported that the bug with non-table alias types was fixed based on how they handle internal transactions, but it did not cover a similar scenario later introduced with table types. I am still waiting on confirmation of whether that original bug should have ever been closed as fixed; I have suggested that all four be closed as by design. This is partly because it is kind of how I expected it to work, and partly because I get the sense from UC that "fixing" it to work in a different way is extremely complex, could break backward compatibility, and would be helpful in a very limited number of use cases. Nothing against you or your use case, but outside of test scenarios I'm not inclined to believe there is much value in this actually working.
UPDATE #2
I've blogged about this issue:
http://www.sqlperformance.com/2013/11/t-sql-queries/single-tx-deadlock
Best Answer
Although you can't pass a datepart keyword as a parameter to a user-defined function, you can pass a string literal and perform the needed conversion. Below is a table-valued function example that leverages a numbers table to return an ad hoc range of datetime values, which you can extend as needed.
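A sketch of that approach, assuming a dbo.Numbers helper table with an integer column Number starting at 0 (all names here are illustrative, not from the original):

```sql
-- Inline table-valued function: @datepart arrives as a string literal
-- and a CASE expression performs the conversion to the matching
-- DATEADD call, since datepart keywords can't be parameterized.
CREATE FUNCTION dbo.GetDateTimeRange
(
    @start    DATETIME,
    @end      DATETIME,
    @datepart VARCHAR(10)   -- 'day', 'hour', 'minute'; extend as needed
)
RETURNS TABLE
AS
RETURN
    SELECT DateValue =
        CASE @datepart
            WHEN 'day'    THEN DATEADD(DAY,    n.Number, @start)
            WHEN 'hour'   THEN DATEADD(HOUR,   n.Number, @start)
            WHEN 'minute' THEN DATEADD(MINUTE, n.Number, @start)
        END
    FROM dbo.Numbers AS n
    WHERE
        CASE @datepart
            WHEN 'day'    THEN DATEADD(DAY,    n.Number, @start)
            WHEN 'hour'   THEN DATEADD(HOUR,   n.Number, @start)
            WHEN 'minute' THEN DATEADD(MINUTE, n.Number, @start)
        END <= @end;
```

Usage would look like SELECT DateValue FROM dbo.GetDateTimeRange('20130101', '20130107', 'day'), returning one row per day in the interval.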