SQL Server – Max Memory Settings on Multi-Instance SQL Server 2008 R2 Cluster

best practices · clustering · configuration · memory · sql server

  • SQL Server 2008 R2 multi-instance cluster (on VMware vSphere 5.1)
  • 2 nodes, each running 2 instances (4 instances in total)
  • 16 GB of RAM per node.

The only time the instances are on the same node is when I am patching the other node.

Brent Ozar's "max server memory" recommendation is to leave Windows 4 GB or 10% of total RAM, whichever is more.
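Applied here, the arithmetic is simple (a quick illustrative calculation using the 16 GB figure above):

-- Illustrative only: the "leave 4 GB or 10%, whichever is more" rule
-- on a 16 GB node (CASE instead of IIF, which doesn't exist in 2008 R2).
DECLARE @totalMB INT = 16384, @reserveMB INT;

SET @reserveMB = CASE WHEN @totalMB / 10 > 4096
                      THEN @totalMB / 10
                      ELSE 4096 END;  -- 1638 < 4096, so reserve 4096 MB

SELECT @totalMB - @reserveMB AS [max server memory (MB)]; -- 12288 MB for SQL Server

That leaves roughly 12 GB per node for SQL Server, which is where the 12288 MB figure in the answer below comes from.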

Since this is a cluster, how should I set the max memory on each node? Should I treat each as a standalone server? That would ensure memory on each node is not wasted. However, after a node failure, the max memory total for all 4 instances would exceed the physical memory of the single surviving node. Will this cause any issues during the window until the failed node is recovered? Do I need to lower the max memory setting on the 4 instances until it is back? Or is SQL Server smart enough to keep working (using the page file if necessary)?

Best Answer

You should absolutely make the most of the hardware when you are in an optimal config, and adjust when you are in maintenance mode. And yes, you will have an issue while both (or all four?) instances are active on the same node. Since a failover induces a service start on the now-active node, you can adjust the max memory of each server in that event using a startup procedure. I blogged about this here, but for a different reason (failing over to a node with a different amount of memory).

Basically, you just need to check whether both instances are on the same node (this will require a linked server to be set up in both directions) and adjust accordingly. Here is a very quick and completely untested example based on my blog post, assuming there is only one instance on each node at present (the question is a bit ambiguous as to whether you have 2 instances in total or 4):

-- Note: startup procedures must live in master, so create this there.
-- The linked server [SERVER\INSTANCE] needs "RPC Out" enabled, and
-- sp_configure assumes 'show advanced options' is already on.
CREATE PROCEDURE dbo.OptimizeInstanceMemory
AS
BEGIN
  SET NOCOUNT ON;

  DECLARE
    @thisNode      NVARCHAR(255) = CONVERT(NVARCHAR(255),
                      SERVERPROPERTY('ComputerNamePhysicalNetBIOS')),
    @otherNode     NVARCHAR(255),
    @optimalMemory INT = 12288, -- 12 GB when we have the node to ourselves
    @sql           NVARCHAR(MAX);

  -- Ask the other instance which physical node it is currently running on:
  SET @sql = N'SELECT @otherNode = CONVERT(NVARCHAR(255),
                        SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));';

  EXEC [SERVER\INSTANCE].master..sp_executesql @sql,
    N'@otherNode NVARCHAR(255) OUTPUT', @otherNode OUTPUT;

  IF @thisNode = @otherNode
  BEGIN -- we're on the same node, let's make everyone happy
    SET @optimalMemory = 6144; -- split the budget: 6 GB each
  END

  SET @sql = N'EXEC sp_configure N''max server memory'', @om;
    RECONFIGURE WITH OVERRIDE;';

  -- Apply the setting locally, then push it to the other instance:
  EXEC                   master..sp_executesql @sql, N'@om INT', @optimalMemory;
  EXEC [SERVER\INSTANCE].master..sp_executesql @sql, N'@om INT', @optimalMemory;
END
GO

EXEC [master].dbo.sp_procoption 
  N'dbo.OptimizeInstanceMemory', 'startup', 'true';

Of course, create it again on the other instance, swapping the linked server name.

This gets a little more complex if you have to adjust depending on whether you are sharing the current node with 1, 2 or 3 other instances.
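A rough, completely untested sketch of that extension for the four-instance case: poll each of the other three instances for its node name, count the co-tenants, and divide the budget (the [SERVER\INSTANCE2] through [SERVER\INSTANCE4] linked server names here are placeholders):

DECLARE @thisNode  NVARCHAR(255) = CONVERT(NVARCHAR(255),
                     SERVERPROPERTY('ComputerNamePhysicalNetBIOS')),
        @node      NVARCHAR(255),
        @coTenants INT = 0,
        @om        INT;

-- Repeat this block once per linked server ([SERVER\INSTANCE2] ... [SERVER\INSTANCE4]):
EXEC [SERVER\INSTANCE2].master..sp_executesql
  N'SELECT @n = CONVERT(NVARCHAR(255), SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));',
  N'@n NVARCHAR(255) OUTPUT', @node OUTPUT;
IF @node = @thisNode SET @coTenants += 1;

-- Split the ~12 GB per-node budget among everyone currently on this node:
SET @om = 12288 / (1 + @coTenants);
EXEC master..sp_executesql
  N'EXEC sp_configure N''max server memory'', @om; RECONFIGURE WITH OVERRIDE;',
  N'@om INT', @om;

As in the two-instance version, you would also push the new value to each co-tenant through its linked server, so already-running instances shrink too, not just the one that restarted.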

Note that this has other side effects, such as clearing the plan cache on any instance that didn't just restart or fail over (the freshly started instance's plan cache is empty anyway). But that is arguably better than leaving both instances to assume they still have 12 GB of memory to play with; there will be a lot of thrashing if they're both heavily used.

You may also want to consider other options, such as global MAXDOP and NUMA/CPU affinity, depending on how sensitive the system is to the amount of resources available.
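For instance (the value here is purely illustrative and workload-dependent), the same startup procedure could scale MAXDOP down when the node is shared:

-- Illustrative: reduce parallelism while sharing the node.
-- Like 'max server memory', this is an advanced sp_configure option.
EXEC sp_configure N'max degree of parallelism', 2;
RECONFIGURE WITH OVERRIDE;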