SQL Server – Has the guidance on the setting for 'cost threshold for parallelism' changed with the advent of columnstore indexes?

columnstore | sql-server | sql-server-2014

First off, what I'm not asking. I'm not asking what my setting should be.

Many are recommending upping the value past the default, and I certainly understand why that's the case for B-Tree based queries. But I've been reading about the (almost) linear scalability of in-memory clustered columnstore indexes, and I'm wondering if setting the cost threshold too high could cause SQL Server to starve columnstore-based queries for CPU cores.
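For reference, the setting in question is the server-level sp_configure option. A minimal sketch of checking it is below; the value in the commented-out change is purely a placeholder, not a recommendation:

```sql
-- Minimal sketch: inspect the current 'cost threshold for parallelism' value.
-- Requires 'show advanced options' to be enabled.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism';  -- shows config_value and run_value

-- To change it (the 50 below is illustrative only -- pick a value based on testing):
-- EXEC sp_configure 'cost threshold for parallelism', 50;
-- RECONFIGURE;
```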

So the question is this: Does SQL Server treat columnstore indexes differently where the 'cost threshold for parallelism' setting is concerned, and should that change my decision about what my initial setting should be?

Best Answer

Beyond the Cost Threshold setting, SQL Server appears to treat parallelism differently for columnstore indexes depending on your SQL Server version (2012 vs 2014) and even the datatypes in your table.

I'd start with Joe Chang's post benchmarking decimal vs float datatypes, and read the comments on that post as well. If you want to get exactly the right MAXDOP and Cost Threshold for Parallelism settings for your system, you'll need to perform the level of detailed testing that Joe does in his post, and that takes a lot of work. Because of that, I would focus on your system's primary bottleneck first: use wait stats to confirm that parallelism or CPU pressure is actually a problem for you, and then start by tuning the most CPU-intensive queries rather than changing server-level settings.
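If it helps, a rough starting point for that wait-stats check might look like the sketch below. The specific wait types listed are illustrative (the classic parallelism and CPU-pressure waits), not a definitive list, and the percentages should be interpreted against your whole workload, not in isolation:

```sql
-- Minimal sketch, assuming the standard DMVs: see how much of total wait time
-- is attributable to parallelism (CXPACKET) and CPU pressure (SOS_SCHEDULER_YIELD).
SELECT wait_type,
       wait_time_ms / 1000.0        AS wait_time_s,
       signal_wait_time_ms / 1000.0 AS signal_wait_time_s,
       100.0 * wait_time_ms
             / SUM(wait_time_ms) OVER () AS pct_of_listed_waits
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'CXPACKET', N'SOS_SCHEDULER_YIELD')
ORDER BY wait_time_ms DESC;
```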