SQL Server – Query slower after upgrade from SQL Server 2008 R2 to 2014 SP2

cardinality-estimates, performance, query-performance, sql-server, sql-server-2014

I recently upgraded from SQL Server 2008 R2 to 2014 SP2 and raised the databases to compatibility level 120.

This query generates a better execution plan with the old cardinality estimator (trace flag 9481 enabled): https://www.brentozar.com/pastetheplan/?id=Hy9skV1Ox
It only uses 269 logical reads on Object2 and 3 on Object3.

With the new cardinality estimator I get this execution plan: https://www.brentozar.com/pastetheplan/?id=BJAnpXk_e
Unfortunately this results in 269 logical reads on Object2 and 68688 on Object3.

The server is configured with 'optimize for ad hoc workloads' enabled.
Even enabling trace flag 4199 doesn't improve the estimates or the execution plan.
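
For reference, a trace flag can be enabled globally like this (setting it as a startup parameter works as well):

    DBCC TRACEON (4199, -1);        -- -1 applies the flag to all sessions
    DBCC TRACESTATUS (4199, 9481);  -- confirm which flags are currently active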

Why does the new cardinality estimator generate such a bad plan?
Why are the estimates so far off?
What can I do to improve this, besides disabling the new estimator or using query hints?

Best Answer

I understand your disappointment with the query plan regressions that you experienced. However, Microsoft changed some core assumptions of the cardinality estimator, and some plan regressions could not be avoided. To quote Juergen Thomas:

However to state it pretty clearly as well, it was NOT a goal to avoid any regressions compared to the existing CE. The new SQL Server 2014 CE is NOT integrated following the principles of QFEs. This means our expectation is that the new SQL Server 2014 CE will create better plans for many queries, especially complex queries, but will also result in worse plans for some queries than the old CE resulted in.

To answer your first question, the optimizer appears to pick a worse plan with the new CE because of a 1 row cardinality estimate from Object2. This makes a nested loop join very attractive to the optimizer. However, the actual number of rows returned from Object2 was 34182, so the estimated cost of the nested loop plan was built on a row count that was too low by a factor of roughly 34000.

The legacy CE gives a 208.733 cardinality estimate from Object2. This is still very far off, but it's enough to make the merge join plan cheaper, by estimated cost, than the nested loop join plan. SQL Server gave the nonclustered index seek on Object3 a cost of 0.0032831. With a nested loop plan under the legacy CE, we could expect the total cost of 208.733 index seeks to be about 0.0032831 * 208.733 = 0.68529, which is much higher than the final estimated subtree cost of the merge join plan, 0.0171922.

To answer your second question, the cardinality estimate formulas for a query as simple as yours are actually published by Microsoft. I recommend the excellent white paper on the differences between the legacy and new CE, "Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator". Focus on why the cardinality estimate is 1 under the new CE and 208.733 under the legacy CE. That's unexpected, because the legacy CE assumes full independence of filters while the new CE uses exponential backoff, so in general for such a query I would expect the new CE to give a larger cardinality estimate for Object2. You should be able to figure out what's going on by looking at the statistics on Object2.
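
To sketch the difference for a conjunction of filter predicates with individual selectivities s1 <= s2 <= s3 <= s4 (most selective first), the documented formulas are:

    legacy CE (full independence):  selectivity = s1 * s2 * s3 * s4
    new CE (exponential backoff):   selectivity = s1 * s2^(1/2) * s3^(1/4) * s4^(1/8)

Because the backoff dampens each additional predicate, the new CE's combined selectivity can never be smaller than the legacy CE's for the same inputs. That's why a 1 row estimate under the new CE against 208.733 under the legacy CE points at something unusual in the statistics rather than at the backoff formula itself.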

To answer your third question, we can get general strategies from the white paper. The following is an abbreviated quote:

  • Retain the new CE setting if specific queries still benefit, and “design around” performance issues using alternative methods.
  • Retain the new CE, and use trace flag 9481 for those queries that had performance degradations directly caused by the new CE (see the per-query example after this list).
  • Revert to an older database compatibility level, and use trace flag 2312 for queries that had performance improvements using the new CE.
  • Use fundamental cardinality estimate skew troubleshooting methods.
  • Revert to the legacy CE entirely.
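
For the second option, the flag can be scoped to a single statement with QUERYTRACEON (which requires sysadmin rights, or a plan guide that wraps the hint). A minimal sketch with placeholder names, since the real query text wasn't posted:

    SELECT o2.SomeColumn, o3.OtherColumn
    FROM dbo.Object2 AS o2
        INNER JOIN dbo.Object3 AS o3
            ON o3.JoinKey = o2.JoinKey
    WHERE o2.FilterColumn = @FilterValue
    OPTION (QUERYTRACEON 9481);  -- legacy (pre-2014) CE for this statement only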

For your problem in particular, I would focus on the statistics first. It's not clear to me why the cardinality estimate for the index scan on Object2 would be so far off. I recommend updating statistics with FULLSCAN on all of the involved objects and indexes before doing more tests; updating the statistics again after changing the CE is also a good step. You should be able to use the white paper to work out exactly why you're seeing the cardinality estimates that you're seeing.
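
A minimal sketch, assuming the anonymized names from the plans stand in for your real tables, and with a placeholder statistics name:

    UPDATE STATISTICS dbo.Object2 WITH FULLSCAN;
    UPDATE STATISTICS dbo.Object3 WITH FULLSCAN;

    -- Then inspect the histogram and density vector that feed the estimates;
    -- 'IX_Object2_Filter' is a placeholder for your statistics/index name.
    DBCC SHOW_STATISTICS ('dbo.Object2', 'IX_Object2_Filter');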

I can give you more detailed help if you provide more information. I understand wanting to protect your IP, but what you have there is a pretty simple query. Can you change the table and column names and provide the exact query text, the relevant table DDL, the index DDL, and information about the statistics?

If all else fails and you need to fix this without hints or trace flags, you could try upgrading to SQL Server 2016 or changing the indexes on your tables. You're unlikely to get the bad nested loop plan if you remove the index, but of course removing an index could negatively affect other queries.
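
If you do move to SQL Server 2016, note that the CE choice becomes a per-database setting, so you would no longer need server-wide trace flags to fall back for a single database:

    -- Run in the affected database (SQL Server 2016 and later)
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;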