It looks like SQL Server is generating a parameterized query plan that can work for any value of @CustomerPartitionKey. To do so, it appears to treat @CustomerPartitionKey both as a partition ID and as a column being sought on. If we look at the query plan with the bad estimate (3000 rows estimated, 300000 actual), there are actually two separate seek predicates on part.Contact related to @CustomerPartitionKey:
Seek Keys[1]: Prefix: PtnId1004 = Scalar Operator([Expr1008]), [Test].[part].[Contact].CustomerPartitionKey = Scalar Operator([@CustomerPartitionKey])
I think that the latter ([Test].[part].[Contact].CustomerPartitionKey = Scalar Operator([@CustomerPartitionKey])) is able to get a proper estimate based on parameter sniffing for the value of @CustomerPartitionKey. However, the former (Prefix: PtnId1004 = Scalar Operator([Expr1008])) is likely not able to do so, perhaps because Expr1008 is a complicated expression that handles partition elimination: [Expr1008] = RangePartitionNew([@CustomerPartitionKey],(0),(0),(1),(2),...,(97),(98)).
In this case there are 100 partitions, and the row estimate is exactly 100 times too low: SQL Server isn't able to process the partition elimination in the same smart way that it processes the actual seek on the column, where it uses an estimate based on the sniffed runtime parameter value of 3. This theory is supported by the way the estimated rows vary if you change the number of partitions; with 90 partitions instead, the estimate becomes 3333.33 (300000 / 90).
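For context, here is a minimal sketch of the kind of setup being described. All object names, the RANGE direction, and the table definition are assumptions for illustration, not the actual schema; the parameterized query at the end is the shape that produces the dual seek predicates above.

-- Hypothetical repro; assumes the [part] schema already exists.
-- The real partition function has boundaries 0..98, yielding 100 partitions.
CREATE PARTITION FUNCTION pfCustomer (int)
AS RANGE LEFT FOR VALUES (0, 1, 2, 3, 4); -- extend to 0..98 for the full 100-partition case

CREATE PARTITION SCHEME psCustomer
AS PARTITION pfCustomer ALL TO ([PRIMARY]);

CREATE TABLE part.Contact
(
    CustomerPartitionKey int NOT NULL,
    ContactId int NOT NULL,
    INDEX IX_Contact CLUSTERED (CustomerPartitionKey, ContactId)
) ON psCustomer (CustomerPartitionKey);

-- Parameterized form: the cached plan must work for any @CustomerPartitionKey,
-- so the optimizer folds partition elimination into a runtime RangePartitionNew()
-- prefix instead of resolving it at compile time.
DECLARE @CustomerPartitionKey int = 3;

SELECT *
FROM part.Contact
WHERE CustomerPartitionKey = @CustomerPartitionKey;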
In our own queries, we typically use a literal (e.g., 3 in this case) or add OPTION (RECOMPILE) whenever we write a query that is going to take advantage of partition elimination. This practice has worked fairly well for us, given that the number of queries on the system is modest and query compilation overhead for queries against large partitioned tables is not a concern for us. Not necessarily a satisfying answer, but it might work for you.
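A sketch of both workarounds, using the same hypothetical table as above:

-- Workaround 1: a literal allows compile-time partition elimination,
-- so the estimate comes straight from the histogram for that value.
SELECT *
FROM part.Contact
WHERE CustomerPartitionKey = 3;

-- Workaround 2: OPTION (RECOMPILE) builds a fresh plan for the sniffed
-- parameter value on every execution, at the cost of a compile each time.
DECLARE @CustomerPartitionKey int = 3;

SELECT *
FROM part.Contact
WHERE CustomerPartitionKey = @CustomerPartitionKey
OPTION (RECOMPILE);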
Best Answer
In my experience, there are a couple of things to consider. Consider your situation with regard to CPU, RAM, and storage capacity, and don't forget columnstore indexes, which offer both good compression and good query performance.
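For example, a minimal sketch (the index name is made up, and this assumes the table does not already have a clustered index; if it does, convert it by recreating the existing clustered index as columnstore with WITH (DROP_EXISTING = ON) instead):

-- Clustered columnstore: good compression plus batch-mode processing;
-- partitioning is preserved because the index inherits the table's partition scheme.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Contact
ON part.Contact;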