SQL Server – Why is the Target Server Memory slightly higher than the Total Server Memory?

Tags: memory, performance, sql-server-2012

I'm doing a performance test on a high-traffic Microsoft Dynamics database (16,000+ users, an estimated 1,000+ concurrent at peak).
The server went live a few months ago, and I'm running some tests to see how it is handling the current traffic.

While checking some performance counters during business hours, I came across an oddity in the target and total server memory.
I would normally only flag this as an issue if the target memory were clearly higher than the total memory, combined perhaps with other counters (depending on the size of the difference), but the results here are somewhat odd.

Using:

SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters 
where (object_name like '%Buffer Manager%' and counter_name like '%page life%')
or (object_name like '%Memory Manager%' and counter_name like '%memory grants pending%')
or (object_name like '%Memory Manager%' and counter_name like '%target server memory%')
or (object_name like '%Memory Manager%' and counter_name like '%total server memory%');

I came to the following results:

counter_name                 |    cntr_value  
------------------------------------------------
Page life expectancy         |    14887
Memory Grants Pending        |    0
Target Server Memory (KB)    |    44040192
Total Server Memory (KB)     |    44040184
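
A quick sanity check of these numbers (a hypothetical Python sketch, using the counter values from the table above): the counters are reported in KB, and SQL Server buffer-pool pages are 8 KB each, so the gap between target and total here is exactly one page.

```python
# Counter values from sys.dm_os_performance_counters, reported in KB
target_kb = 44040192
total_kb = 44040184

print(f"Target: {target_kb / 1024 / 1024:.0f} GB")  # 42 GB
print(f"Total:  {total_kb / 1024 / 1024:.0f} GB")

diff_kb = target_kb - total_kb
# SQL Server pages are 8 KB, so convert the difference into pages
print(f"Difference: {diff_kb} KB = {diff_kb // 8} page(s)")  # 8 KB = 1 page(s)
```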

Clearly there is currently no memory pressure, and the page life expectancy is higher than required. However, it looks as if at some point SQL Server was exactly 8 KB short of memory.

Something about these numbers seems odd to me, but I can't put my finger on it.
Is this an indication that at some point SQL Server was on the verge of a memory problem?
Am I correct in thinking that because these numbers are so similar, I can dismiss this for now and test again after a decent amount of time has passed?

Best Answer

I must say this is normal and nothing to worry about. This blog explains how target server memory is calculated. Please see the formula below:

Target1 = Current committed pages of SQL Server + (Available Physical Memory - min(Total Physical Memory Pages / 20, Available Physical Memory Pages / 2))

ullAvailPageFile: The maximum amount of memory the current process can commit, in bytes. This value is equal to or smaller than the system-wide available commit value. To calculate the system-wide available commit value, call GetPerformanceInfo and subtract the value of CommitTotal from the value of CommitLimit.

If (Max Server Memory < ullAvailPageFile)
{
    Target2 = Max Server Memory
}
Else
{
    Target2 = Total Physical Memory
}

Target Server Memory (KB) = Minimum(Target1, Target2)
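
To make the calculation concrete, here is a minimal Python sketch of the logic described above. The function name and all input numbers are hypothetical illustrations, and every value is in KB.

```python
def target_server_memory(committed_kb, avail_phys_kb, total_phys_kb,
                         max_server_memory_kb, ull_avail_page_file_kb):
    """Hypothetical model of the Target Server Memory formula above (all KB)."""
    # Target1: committed pages plus available memory, minus a reserve for
    # the OS (the smaller of 1/20 of total RAM or 1/2 of available memory)
    target1 = committed_kb + (avail_phys_kb
                              - min(total_phys_kb // 20, avail_phys_kb // 2))
    # Target2: max server memory when it fits within the commit limit,
    # otherwise total physical memory
    if max_server_memory_kb < ull_avail_page_file_kb:
        target2 = max_server_memory_kb
    else:
        target2 = total_phys_kb
    return min(target1, target2)

# Hypothetical example: 64 GB box, max server memory set to 42 GB,
# plenty of commit headroom
gb = 1024 * 1024  # KB per GB
print(target_server_memory(
    committed_kb=40 * gb, avail_phys_kb=20 * gb, total_phys_kb=64 * gb,
    max_server_memory_kb=42 * gb, ull_avail_page_file_kb=100 * gb))
# → 44040192
```

With these assumed inputs Target2 wins, and the result is 44040192 KB (42 GB), matching the Target Server Memory counter in the question: the server is simply targeting its max server memory setting.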

So if Available Physical Memory is very high, or when Max Server Memory is low, Target Server Memory (KB) will simply be the Max Server Memory value; otherwise it is the value derived from the formula above.

As per my findings using this formula, even if target server memory is a bit greater than total server memory it is fine, and this is simply by virtue of how the value is calculated.

If target server memory is greater than total server memory, it does not always signify memory pressure; you should let go of that belief if you have it. As you mentioned, other counters have to be taken into the picture. SQL Server changes its target and total server memory values many times, depending on the load on the system. Consider a scenario where you rebuild the indexes of all your big tables: there would be a flurry of activity, and PLE would certainly drop while target memory increases, but this does not mean there is memory pressure. SQL Server memory has just adjusted itself to the current load.

In an ideal world you would have enough RAM to hold all of your databases in memory, but this is often not possible.