SQL Server – a deterministic method for evaluating a sensible buffer pool size

sql-server, sql-server-2008-r2, sql-server-2012

I'm trying to come up with a sane way to understand whether the max server memory (mb) setting is appropriate (whether it should be lower, higher, or stay as it is). I am aware that max server memory (mb) should always be low enough to leave room for the operating system itself, etc.

The environment I'm looking at has several hundred servers, and I need a reliable formula I can use to determine whether the present size of the buffer pool is appropriate, since RAM is billed per GB allocated to each server. The entire environment is virtualized, and the "physical" RAM allocated to a VM can easily be changed up or down.

I have a particular SQL Server Instance I'm looking at now with a PLE of 1,100,052 seconds, which equates to 12.7 days (the amount of time the server has been up). The server has a max server memory setting of 2560MB (2.5GB), of which only 1380MB (1.3GB) is actually committed.

I've read several items, including one by Jonathan Kehayias (post) and another by Paul Randal (post), among others. Jonathan advocates monitoring for a PLE below 300 per 4 GB of buffer pool as being too low. For the SQL Server instance above, 300 * (2.5 / 4) = 187.5 results in a really, really low target PLE of about 187. This instance has 290 GB of SQL Server data (not including log files) and is used only for integration testing. Assuming the last 12 days are representative of typical usage for this server, I'd say the max server memory (mb) setting could be lowered.
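Jonathan's rule of thumb scales linearly with buffer pool size; a quick Python sketch of that arithmetic, just to make the scaling explicit (the 300-per-4-GB constant is from his post, the function name is mine):

```python
def ple_floor(buffer_pool_gb):
    """Minimum healthy Page Life Expectancy in seconds, per Jonathan
    Kehayias's guideline of 300 seconds per 4 GB of buffer pool."""
    return 300 * (buffer_pool_gb / 4.0)

# The 2.5 GB instance above gets a floor of 187.5 seconds -- far below
# its observed PLE of ~1.1 million seconds.
print(ple_floor(2.5))   # 187.5
print(ple_floor(64))    # 4800.0
```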

At the other end of the scale, I have another integration-test server with a PLE of 294 and a max server memory (mb) setting of only 1 GB. This server has only 224 MB of SQL Server data (not including logs) and is running some BizFlow databases. This server might benefit from a higher max server memory (mb) setting.

I'm thinking a good starting place for finding servers that might be assigned too much memory could include looking at:

-- Note: these column names are valid through SQL Server 2008 R2; on
-- SQL Server 2012+ they were renamed to physical_memory_kb,
-- committed_kb, and committed_target_kb.
SELECT 
    RamMB = physical_memory_in_bytes / 1048576
    , BufferPoolCommittedMB = bpool_committed * 8192E0 / 1048576
    , BufferPoolCommitTargetMB = bpool_commit_target * 8192E0 / 1048576
    , PercentOfTargetCommitted = CONVERT(INT, (CONVERT(DECIMAL(18,2), bpool_committed) 
                            / bpool_commit_target) * 100)
FROM sys.dm_os_sys_info;

If BufferPoolCommitTargetMB / BufferPoolCommittedMB is greater than 1, the server is not using the entire buffer pool. If the machine in question also has a PLE greater than "x", it might be a good candidate for a decrease in max server memory (mb).
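To make that rule concrete, here is a small Python sketch of the decision predicate. The function name and the idea of a configurable PLE floor are my own illustration, not an established formula:

```python
def is_shrink_candidate(committed_mb, target_mb, ple_seconds, ple_floor):
    """True when the buffer pool never reached its commit target
    (target/committed > 1) AND PLE is above the chosen floor,
    i.e. the instance may be over-provisioned."""
    return (target_mb / committed_mb) > 1 and ple_seconds > ple_floor

# The 2.5 GB instance above: target 2560 MB, committed 1380 MB, PLE ~1.1M s
print(is_shrink_candidate(1380, 2560, 1100052, 300))  # True
```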

Since the Buffer Manager:Lazy writes/sec performance counter tracks the number of buffers the lazy writer has flushed to disk outside of a checkpoint due to memory pressure, this might be another good thing to look at.

DECLARE @WaitTime DATETIME;
SET @WaitTime = '00:00:15';
DECLARE @NumSeconds INT;
SET @NumSeconds = DATEDIFF(SECOND, 0, @WaitTime);
DECLARE @LazyWrites1 BIGINT;
DECLARE @LazyWrites2 BIGINT;

-- For a named instance the object name is 'MSSQL$<instance>:Buffer Manager';
-- for the default instance it is 'SQLServer:Buffer Manager'.
DECLARE @ObjectName NVARCHAR(255);
SET @ObjectName = CASE
        WHEN SERVERPROPERTY('InstanceName') IS NULL THEN 'SQLServer:Buffer Manager'
        ELSE 'MSSQL$' + CONVERT(NVARCHAR(255), SERVERPROPERTY('InstanceName'))
             + ':Buffer Manager'
    END;

-- 'Lazy writes/sec' is a cumulative counter, so sample it twice and
-- divide the delta by the elapsed seconds.
SELECT @LazyWrites1 = cntr_value 
FROM sys.dm_os_performance_counters dopc
WHERE dopc.counter_name LIKE 'Lazy writes/sec%' COLLATE SQL_Latin1_General_CP1_CI_AS
    AND dopc.object_name = @ObjectName;

WAITFOR DELAY @WaitTime;

SELECT @LazyWrites2 = cntr_value 
FROM sys.dm_os_performance_counters dopc
WHERE dopc.counter_name LIKE 'Lazy writes/sec%' COLLATE SQL_Latin1_General_CP1_CI_AS
    AND dopc.object_name = @ObjectName;

SELECT LazyWritesPerSecond = (@LazyWrites2 - @LazyWrites1) / @NumSeconds;

The above code assumes the server is under load during the 15 seconds it takes to run; otherwise it will report 0, which may be a misleading false negative.

Should I also look at PAGEIOLATCH_* wait stats, or some other wait type, as an indicator of memory pressure (or the lack thereof)?

My question is, how can I reliably determine a "good" target value for PLE and max server memory (mb)?

Best Answer

As you already know, there is no general formula to calculate max server memory. You can do some quick math and reach a value, but you will still need Perfmon counters afterwards to monitor memory usage and adjust accordingly. I know the general formula below and I use it as well; I learned it from this link.

For SQL Server 2005 to 2008 R2

Please note that from SQL Server 2005 to 2008 R2, max server memory controls only the buffer pool, so configuring max server memory is a bit tedious here and involves a few calculations.

  1. Leave 2 GB of memory straight away for the Windows OS.

  2. Of course the system will have antivirus running; please leave 1.5 GB for it. Please note that McAfee and SQL Server do not go hand in hand, so make sure you leave enough for it. You can also check the Perfmon counters Process -> Private Bytes and Working Set to monitor memory usage by the AV and other small applications running on the SQL Server box.


  3. Consider the memory requirements of drivers/firmware. You have to derive this based on the memory requirements of the drivers installed on the system; the RAMMap tool can help.

  4. Consider the non-buffer-pool (aka MTL or MTR) memory requirements of SQL Server:

    SELECT SUM(multi_pages_kb) / 1024 AS multi_pages_mb
    FROM sys.dm_os_memory_clerks;

    + max worker threads * 2 MB

    + memory for direct Windows allocations, approximately 0 to 300 MB in most cases; you may have to increase this if many third-party components are loaded into the SQL Server process (including linked server DLLs, third-party backup DLLs, etc.)

    + if you are using CLR extensively, some additional memory for CLR.

  5. Consider the memory required by jobs (including replication agents, log shipping, etc.) and packages that will run on the server. It can vary from MBs to GBs depending on the number of jobs running; for a medium-sized server you can take it as 250 MB.

  6. Make sure there is enough free space for the operating system:

    Approximately (100 MB for each GB up to 4 GB) + (50 MB for each additional GB up to 12 GB) + (25 MB for each additional GB up to your RAM size)

  7. Any other memory requirements specific to your environment.

    max server memory = total physical memory - (1 + 2 + 3 + 4 + 5 + 6 + 7)

    I have not included memory configuration for SSIS, SSRS, or SSAS; you would also need to subtract the memory required by those services from total physical server memory.
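The arithmetic in the steps above can be sketched in Python. All of the per-item numbers below are hypothetical placeholders for an imaginary 16 GB server; plug in your own measurements:

```python
def os_free_mb(ram_gb):
    """OS free-space reservation: (100 MB per GB up to 4 GB)
    + (50 MB per GB from 4 to 12 GB) + (25 MB per GB beyond 12 GB)."""
    first = 100 * min(ram_gb, 4)
    second = 50 * max(0, min(ram_gb, 12) - 4)
    third = 25 * max(0, ram_gb - 12)
    return first + second + third

ram_gb = 16                        # hypothetical server
total_mb = ram_gb * 1024           # 16384 MB
reservations_mb = (
    2048                   # Windows OS
    + 1536                 # antivirus
    + 100                  # drivers/firmware (hypothetical)
    + 500                  # non-buffer-pool SQL Server memory (hypothetical)
    + 250                  # jobs/packages (medium-sized server)
    + os_free_mb(ram_gb)   # OS free space: 400 + 400 + 100 = 900
    + 0                    # other environment-specific needs
)
max_server_memory_mb = total_mb - reservations_mb
print(max_server_memory_mb)   # 11050
```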

    After you have configured the above, you need to monitor the following counters:

  • SQLServer:Buffer Manager -- Page Life Expectancy (PLE)

  • SQLServer:Buffer Manager -- Checkpoint Pages/sec

  • SQLServer:Memory Manager -- Memory Grants Pending

  • SQLServer:Memory Manager -- Target Server Memory

  • SQLServer:Memory Manager -- Total Server Memory

For SQL Server 2012/2014

From SQL Server 2012 onwards, setting max server memory has become easier, because max server memory now accounts for almost all memory consumption. It controls SQL Server memory allocation including the buffer pool, compile memory, all caches, query execution memory grants, lock manager memory, and CLR memory (basically any "clerk" found in sys.dm_os_memory_clerks). Memory for thread stacks, heaps, linked server providers other than SQL Server, and any memory allocated by non-SQL Server DLLs is not controlled by max server memory.

You can allocate 75-80% of memory to SQL Server and then use Perfmon counters to monitor memory usage. In SQL Server 2012 a few Perfmon counters have been deprecated; the Buffer Manager counters are deprecated, so you must use the Memory Manager counters:
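The 75-80% guideline, expressed as Python arithmetic (this is only the rule-of-thumb starting range from above; tune afterwards with Perfmon):

```python
def max_server_memory_range_mb(total_ram_gb, low=0.75, high=0.80):
    """Starting range for max server memory on 2012+: 75-80% of RAM, in MB."""
    total_mb = total_ram_gb * 1024
    return int(total_mb * low), int(total_mb * high)

# A hypothetical 32 GB server gets a starting range of roughly 24-25.6 GB.
print(max_server_memory_range_mb(32))   # (24576, 26214)
```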

  • SQL Server:Memory Manager -- Target Server Memory (KB)

  • SQL Server:Memory Manager -- Total Server Memory (KB)

  • SQL Server:Memory Manager -- Free Memory (KB)

  • SQL Server:Memory Manager -- Database Cache Memory (KB)

On the value of PLE, I have used Jonathan's formula, and fortunately it has worked for me.