SQL Server – Can Columnstore Cache Size Be Artificially Limited?

columnstore, sql-server

I am trying to evaluate the potential performance of a data science workload on a large-ish dataset (~200GB). We have had excellent results using columnstore indexes in SQL Server 2017.

However, the business is interested in cost savings, and I have pointed out to them that columnstore indexes are now available in Standard Edition, albeit with a 32GB memory limit. It's possible to determine the current memory use by looking at the DMV sys.dm_column_store_object_pool.
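For reference, this is roughly the kind of query I'm using against that DMV to see how much of the pool is in use. It's only a sketch; the column names (memory_used_in_bytes, object_type_desc) are what I believe the DMV exposes, so verify against your build:

    -- Approximate columnstore object pool usage per database and object type, in MB
    SELECT
        DB_NAME(database_id)                        AS database_name,
        object_type_desc,
        SUM(memory_used_in_bytes) / 1024.0 / 1024.0 AS memory_used_mb
    FROM sys.dm_column_store_object_pool
    GROUP BY database_id, object_type_desc
    ORDER BY memory_used_mb DESC;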

I would like to know if it's possible to limit the columnstore-dedicated RAM allocation to evaluate how the workload performance changes.

I'm aware that it's possible to limit the total RAM usage of the SQL Server instance, but that caps everything and will not give a realistic indication of the columnstore-specific limit.
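(For completeness, the instance-wide cap I'm referring to is the usual sp_configure setting, e.g. limiting the instance to 32GB overall; the value shown is just an illustration:)

    -- Cap total memory for the instance (value in MB); this is NOT columnstore-specific
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 32768;
    RECONFIGURE;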

I should clarify that since we are currently using Developer Edition, there are no resource limits at all.

Best Answer

If you have an MSDN license you can download and install SQL Server Standard Edition for Dev/Test.

Otherwise you can use an Azure Pay-As-You-Go instance for testing. See, for example:

SQL Server 2017 Standard on Windows Server 2016