Your column is of type CHAR(6). Your filter is of type VARCHAR (perhaps counter-intuitive, but that is what the constant literal '200706' is). The rules of Data Type Precedence dictate that the comparison occurs using the type with higher precedence, i.e. the VARCHAR type. This is a different type from the one used by the partitioning function, therefore you do not get partition elimination.
Try this instead:
select count(*)
from T (nolock)
where Month = CAST('200706' as CHAR(6));
Your plan should get a partitioning elimination filter.
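An equivalent parameterized form should behave the same way (a sketch, assuming the same table T partitioned on a Month CHAR(6) column):

```sql
-- Declaring the parameter as CHAR(6) keeps the comparison in the
-- partitioning column's own type, so the optimizer can still
-- perform partition elimination on the seek.
DECLARE @Month CHAR(6) = '200706';

SELECT COUNT(*)
FROM T WITH (NOLOCK)
WHERE Month = @Month;
```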
The fact that you are comparing it against an integer variable is irrelevant.
The plan for COUNT always has a CONVERT_IMPLICIT(int,[ExprNNNN],0), where ExprNNNN is the label for the expression representing the result of the COUNT.
My assumption has always been that the code for COUNT just ends up calling the same code as COUNT_BIG, and the cast is necessary to convert the bigint result of that back down to int.
In fact COUNT_BIG(*) isn't even distinguished in the query plan from COUNT(*). Both show up as Scalar Operator(Count(*)).
COUNT_BIG(nullable_column) does get distinguished in the execution plan from COUNT(nullable_column), but the latter still gets an implicit cast back down to int.
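One way to see this for yourself (a sketch; the ExprNNNN labels will differ on your server) is to inspect the Compute Scalar's Defined Values in the actual execution plan of any trivial COUNT query:

```sql
-- Any small table works; sys.objects is used here for convenience.
SELECT COUNT(*) AS c FROM sys.objects;

-- In the plan, the Compute Scalar's Defined Values show something like:
--   [Expr1003] = CONVERT_IMPLICIT(int,[Expr1004],0)
-- while the Stream Aggregate beneath it shows:
--   [Expr1004] = Scalar Operator(Count(*))
-- Swapping COUNT(*) for COUNT_BIG(*) removes the CONVERT_IMPLICIT,
-- but the aggregate itself still displays Count(*).
```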
Some evidence that this is the case is below.
WITH
E1(N) AS
(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
) -- 1*10^1 or 10 rows
, E2(N) AS (SELECT 1 FROM E1 a, E1 b) -- 1*10^2 or 100 rows
, E4(N) AS (SELECT 1 FROM E2 a, E2 b) -- 1*10^4 or 10,000 rows
, E8(N) AS (SELECT 1 FROM E4 a, E4 b) -- 1*10^8 or 100,000,000 rows
, E16(N) AS (SELECT 1 FROM E8 a, E8 b) -- 1*10^16 or 10,000,000,000,000,000 rows
, T(N) AS (SELECT TOP (2150000000)
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E16)
SELECT COUNT(CASE WHEN N < 2150000000 THEN 1 END)
FROM T
OPTION (MAXDOP 1)
This takes about 7 minutes to run on my desktop and returns the following:
Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
Warning: Null value is eliminated by an aggregate or other SET operation.
This indicates that the COUNT must have continued on after an int would have overflowed (at 2,147,483,647), and that the last row (N = 2,150,000,000) was processed by the COUNT operator, its CASE expression yielding the NULL that triggers the warning.
By way of comparison, replacing the COUNT expression with SUM(CASE WHEN N < 2150000000 THEN 1 END) returns

Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.

with no ANSI warning about NULL, from which I conclude that the overflow happened in this case during the aggregation itself, before row 2,150,000,000 was reached.
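For reference, the comparison run is the same CTE stack with only the aggregate swapped (a sketch; expect a similarly long runtime):

```sql
-- Same CTEs E1..E16 and T as in the query above; only this line changes.
SELECT SUM(CASE WHEN N < 2150000000 THEN 1 END)
FROM T
OPTION (MAXDOP 1);
-- Fails with Msg 8115 (int overflow) and no ANSI NULL warning,
-- consistent with the overflow occurring during the aggregation itself.
```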
Best Answer
It is the parameter that has been implicitly converted, not the column.

The query has been subject to Simple Parameterization by SQL Server. You have no control over the datatypes used in this process. It uses the smallest datatype that can hold the literal value (5 can fit into a tinyint). The implicit cast of a tinyint parameter to an int won't cause any problems. However, to avoid having multiple plans in cache for int, smallint, and tinyint, and to get rid of the implicit cast, you could explicitly parameterize the query yourself with a parameter of datatype int, rather than having it be parameterized automatically.

One other alternative would be to block simple parameterization by adding a redundant AND 1=1 as below. But I don't recommend this, as then you will get plans compiled and cached for every different literal value that you pass.
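The two alternatives might look like this (a sketch; the table T and int column col are hypothetical stand-ins for your own schema):

```sql
-- Option 1 (preferred): explicit parameterization with an int parameter,
-- so only one plan is cached regardless of the literal's size.
EXEC sys.sp_executesql
    N'SELECT * FROM T WHERE col = @p',
    N'@p int',
    @p = 5;

-- Option 2 (not recommended): a redundant predicate that blocks
-- simple parameterization, caching one plan per distinct literal.
SELECT * FROM T WHERE col = 5 AND 1 = 1;
```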