Your question can basically be rephrased as 'How does the query memory grant work?'. A good read on the subject is Understanding SQL Server memory grant. Before a query is launched into execution, it may require a memory grant for sorts, hashes and other memory-hungry operations. This memory grant is an estimate. Based on the current system state (number of requests running and pending, memory available, etc.) the system grants the query up to the required amount. Once the memory is granted, the query starts execution (it may have to wait in the dreaded 'resource semaphore' queue before it gets the grant). During execution, its memory grant is guaranteed by the system. This amount of memory can be shared with data pages (since they can always flush to disk) but never with other memory usage (i.e. it cannot be subject to 'steal'). So when the query starts asking for committed memory from its grant, the engine will deploy what you call 'strategy #1': data pages may be evicted (flushed if dirty) in order to give the query the memory it was promised. Now, if the estimate was correct and the grant was 100% of the requested memory, the query should not 'spill'. But if the estimate was incorrect (it boils down to cardinality estimates, and is therefore subject to stale statistics), or if the query did not get the entire grant it had asked for, the query will 'spill'. This is when tempdb comes into the picture and performance usually tanks.
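You can watch this mechanism in action: the memory grant DMV shows, for each executing request, what was requested, what was actually granted, and how long the request sat waiting on the resource semaphore. For example:

```sql
-- Inspect memory grants for currently executing requests.
-- Requests still waiting in the resource semaphore queue have grant_time = NULL.
SELECT
    session_id,
    requested_memory_kb,   -- what the query asked for
    granted_memory_kb,     -- what it actually received
    used_memory_kb,
    grant_time,            -- NULL while still waiting for the grant
    wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```

A large gap between `requested_memory_kb` and `granted_memory_kb`, or non-trivial `wait_time_ms`, is a strong hint that grants (or the lack of them) are your problem.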
The only knob you have at your disposal that controls something in this process is the Resource Governor. Since the RG can be used to specify a MIN setting for a pool, it can be used to reserve memory for a certain workload so that it actually gets the memory grant it requests. Of course, only after you have done the proper investigation showing that reduced memory grants are the culprit, and after the impact on other workloads has been evaluated. And tested, of course.
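As a sketch of what that would look like (the pool, group and login names here are invented, and the percentages are placeholders, not recommendations):

```sql
-- Reserve a slice of query memory for a hypothetical reporting workload.
CREATE RESOURCE POOL ReportingPool
    WITH (MIN_MEMORY_PERCENT = 30, MAX_MEMORY_PERCENT = 50);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
GO

-- A classifier function routes incoming sessions into the group,
-- here by login name (classifier functions live in master and
-- must be schema-bound).
CREATE FUNCTION dbo.fn_Classifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN SUSER_SNAME() = N'ReportingLogin'
                THEN N'ReportingGroup'
                ELSE N'default' END;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_Classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Remember that reserving memory for one pool necessarily takes it away from everything else, hence the caveat about evaluating the impact on other workloads first.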
Now let's go back to your original question. If your investigation is correct (a very big if), I would like to point out two problems:
- you are running queries that require memory grants in production, for a web site. This is a big no-no. Memory grants are indicative of analytical queries, which have no place in serving HTTP requests.
- your queries are probably not even getting the memory grant they request. Again, even more of a no-no for a latency-critical workload such as a web site.
So what that tells me is that you have a fundamental design and architecture problem. Web sites are latency driven and should generate an OLTP-like workload, with no memory grants and no memory pressure on queries, not to mention no spills. Analytical queries should run in offline jobs that store their pre-processed results for quick availability when HTTP requests need them.
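To illustrate that last point, a common pattern (the summary table `dbo.ObjectCountsBySchema` here is invented for illustration) is an agent job that periodically refreshes a pre-aggregated table, so the web tier only ever does a cheap, grant-free lookup:

```sql
-- Offline job (e.g. every 5 minutes): rebuild the pre-aggregated results.
-- This is where the expensive, memory-hungry work happens.
TRUNCATE TABLE dbo.ObjectCountsBySchema;

INSERT INTO dbo.ObjectCountsBySchema (schema_id, object_count)
SELECT schema_id, COUNT(*)
FROM sys.objects
GROUP BY schema_id;

-- Web request: a trivial seek against the pre-computed results.
SELECT object_count
FROM dbo.ObjectCountsBySchema
WHERE schema_id = 1;
```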
I suggest a different tack altogether. Instead of naming 18,000 parameters, why not make use of table-valued parameters? I'm making some leaps here about what exactly you're using all these parameters for (since you so handily anonymized them for us :-)), but if you create these types:
CREATE TYPE dbo.VarcharParameters AS TABLE
(
ParamName SYSNAME,
ParamValue VARCHAR(100)
);
CREATE TYPE dbo.BitParameters AS TABLE
(
ParamName SYSNAME,
ParamValue BIT
);
Then change the procedure as follows (please note the comments inline about how to deal with stuff in the TVPs):
CREATE PROCEDURE dbo.ObviouslyAnonymizedProcedure2
@SchemaID INT = NULL,
@TypeDesc NVARCHAR(60) = NULL,
@VCParams dbo.VarcharParameters READONLY,
@BitParams dbo.BitParameters READONLY,
@paramStartRow INT,
@paramMaxRows INT
AS
BEGIN
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DECLARE
@sql NVARCHAR(MAX) = N'',
@From NVARCHAR(MAX) = N'',
@Where NVARCHAR(MAX) = N'',
@LF CHAR(2) = CHAR(13) + CHAR(10),
@Tab CHAR(1) = CHAR(9),
@FLOuter NVARCHAR(MAX),
@FLInner NVARCHAR(MAX);
DECLARE @LFTab CHAR(3) = @LF + @Tab;
SET @FLOuter = @LFTab + ' t1.name'
+ @LFTab + ', t1.object_id'
+ @LFTab + ', SCHEMA_NAME(t1.schema_id) AS schema_name'
+ @LFTab + ', t1.type_desc'
+ @LF;
SET @FLInner = @LFTab + ' t0.name'
+ @LFTab + ', t0.object_id'
+ @LFTab + ', t0.schema_id'
+ @LFTab + ', t0.type_desc'
+ @LF;
SET @From = N' FROM sys.objects AS t0 WITH (NOLOCK) ' + @LF;
IF @SchemaId IS NOT NULL
BEGIN
SET @Where = @Where + ' AND t0.schema_id = @SchemaId';
END
IF @TypeDesc IS NOT NULL
BEGIN
SET @Where = @Where + ' AND t0.type_desc = @TypeDesc'
END
-- obviously you need a bunch more of these, and I'm making
-- a half-educated guess about how the bit params are used:
IF EXISTS (SELECT 1 FROM @BitParams WHERE ParamName = 'paramIsView' AND ParamValue = 1)
BEGIN
SET @Where += ' AND t0.type_desc = ''VIEW'''
END
-- and I'm not clear exactly what you're doing with the varchar params,
-- but if you give some more clues I'm sure we can work that out too.
-- It may be very simple to build a string from those, without having to
-- reference every single one of them by name, depending on what they do.
SET @sql = 'SELECT ' + @FLOuter + ' FROM ( SELECT ROW_NUMBER() OVER
(ORDER BY t0.[object_id]) AS rn, ' + @FLInner +
@From + ' WHERE 1 = 1 ' + @Where + ') AS t1
WHERE t1.rn BETWEEN @paramStartRow + 1
AND @paramStartRow + @paramMaxRows ORDER BY rn;'
EXEC sp_executesql @sql,
N'@SchemaId INT,@TypeDesc NVARCHAR(60),@paramStartRow INT,@paramMaxRows INT',
@SchemaID, @TypeDesc, @paramStartRow, @paramMaxRows;
END
GO
Now you can call it like this:
DECLARE @x dbo.VarcharParameters;
INSERT @x VALUES
('paramFoo', 'wuzzuh'),
('paramGamma', 'foobar');
DECLARE @y dbo.BitParameters;
INSERT @y VALUES
('paramIsView', 0),
('paramIsTable', 0);
EXEC dbo.ObviouslyAnonymizedProcedure2
@SchemaId = 1,
@TypeDesc = NULL,
@VCParams = @x,
@BitParams = @y,
@paramStartRow = 1,
@ParamMaxRows = 20;
I won't show my results, because they'll be different from yours, but I bet the massive reduction in parameters will eliminate the compilation problems you have.
Also, this is how you call the procedure from T-SQL; in order to call it from, say, C#, you'll need to use a DataTable or List or something compatible. I have an example here.
This is also much more flexible in terms of adding new parameters - you don't have to change the interface to the stored procedure, just add them to the procedure body (where relevant) and to the code that populates the data table.
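For instance, supporting a hypothetical new bit parameter later means nothing more than another branch in the procedure body:

```sql
-- Hypothetical new parameter: no change to the procedure's signature,
-- just another check against the TVP contents.
IF EXISTS (SELECT 1 FROM @BitParams WHERE ParamName = 'paramIsTable' AND ParamValue = 1)
BEGIN
    SET @Where += ' AND t0.type_desc = ''USER_TABLE''';
END
```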
Now just fill us in on what all the varchar parameters do and you might be one step closer to a solution. :-)
Best Answer
First, some background information
So far (as of SQL Server 2014) the query optimiser takes only a few things into account:
CPU cores: The core count assigned to SQL Server via the affinity mask determines how efficient a parallel plan can possibly be. For example, on a system with 4 cores you can expect a 4x speedup when switching to a parallel plan, whereas a system with 2 cores only gets a doubling. Because of this, the core count can affect the choice of plan. Unfortunately, not every query scales linearly with the core count, yet the optimiser believes it does. This means that on machines with more than around 12-16 cores, adding more parallelism can actually SLOW the query down. The speed of the CPU is not taken into account, only the number of cores.
Memory Available: When the plan is made, the amount of memory available is taken into account. This determines strategies like hash joins, sort space and other memory-intensive operations. Mis-estimation here is very dangerous and can lead to poor performance, especially if you overestimate the memory available for a hash join and have to spill into tempdb.
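For the first point, a common query-level mitigation is to cap the degree of parallelism yourself rather than letting the optimiser assume linear scaling; a minimal sketch:

```sql
-- Cap this query at 4 workers regardless of how many cores the server has.
SELECT o.schema_id, COUNT(*) AS object_count
FROM sys.objects AS o
GROUP BY o.schema_id
OPTION (MAXDOP 4);
```

The same cap can be applied server-wide via `sp_configure 'max degree of parallelism'`, though the right value depends entirely on your workload.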
Your specific case
Without measurements of your system it is difficult, if not impossible, to know exactly what happened in your scenario. There are too many variables that could have changed at the same time; it may simply be that something else in the environment changed. Any diagnosis would be pure guesswork, and real DBA work is a science, not an exercise in guessing.
You would need to collect the following to know for sure.