You should absolutely make the most of the hardware when you are in an optimal config, and adjust when you are in maintenance mode. And yes, you will have an issue while both (or all four?) instances are active on the same node. Since a failover induces a service start on the now-active node, you can adjust the max memory of each server in that event using a startup procedure. I blogged about this here, but for a different reason (failing over to a node with a different amount of memory):
Basically, you just need to check whether both instances are on the same node (this will require a linked server to be set up in both directions), and adjust accordingly. A very quick and completely untested example, based on my blog post, assuming there is currently only one instance on each node (the question is a bit ambiguous about whether you have 2 instances in total or 4):
CREATE PROCEDURE dbo.OptimizeInstanceMemory -- create this in master; startup procedures must live there
AS
BEGIN
  SET NOCOUNT ON;

  DECLARE
    @thisNode      NVARCHAR(255) = CONVERT(NVARCHAR(255),
                      SERVERPROPERTY(N'ComputerNamePhysicalNetBIOS')),
    @otherNode     NVARCHAR(255),
    @optimalMemory INT = 12288, -- 12 GB, expressed in MB
    @sql           NVARCHAR(MAX);

  -- ask the other instance (via linked server, which needs RPC Out enabled)
  -- which node it is currently running on
  SET @sql = N'SELECT @OtherNode = CONVERT(NVARCHAR(255),
    SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));';

  EXEC [SERVER\INSTANCE].master..sp_executesql @sql,
    N'@OtherNode NVARCHAR(255) OUTPUT', @otherNode OUTPUT;

  IF @thisNode = @otherNode
  BEGIN -- we're on the same node, let's make everyone happy
    SET @optimalMemory = 6144; -- 6 GB each
  END

  -- apply the new setting locally and on the other instance
  -- ('max server memory' is an advanced option, so 'show advanced options' must be enabled)
  SET @sql = N'EXEC sp_configure N''max server memory'', @om;
    RECONFIGURE WITH OVERRIDE;';

  EXEC master..sp_executesql @sql, N'@om INT', @optimalMemory;
  EXEC [SERVER\INSTANCE].master..sp_executesql @sql, N'@om INT', @optimalMemory;
END
GO
EXEC [master].dbo.sp_procoption
N'dbo.OptimizeInstanceMemory', 'startup', 'true';
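To confirm the procedure has actually been marked to run at startup, you can check the ExecIsStartup property (run this in master, using the same procedure name as above):
-- returns 1 if the procedure will execute when the instance starts
SELECT OBJECTPROPERTY(OBJECT_ID(N'dbo.OptimizeInstanceMemory'), 'ExecIsStartup');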
Of course, create it again on the other instance, swapping the linked server name.
This gets a little more complex if you have to adjust depending on whether you are sharing the current node with 1, 2 or 3 other instances; a rough sketch of that variation follows below.
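Here is a very rough, untested sketch of that idea, assuming three hypothetical linked servers ([SERVER\INSTANCE2] through [SERVER\INSTANCE4]) pointing at the other instances, and roughly 12 GB per node available for SQL Server to split evenly (only the local instance is reconfigured here; in practice you would push the setting to every instance, as in the procedure above):
-- Hypothetical sketch: count how many of the instances currently share this node,
-- then divide the memory budget evenly among them.
DECLARE
  @thisNode  NVARCHAR(255) = CONVERT(NVARCHAR(255),
               SERVERPROPERTY(N'ComputerNamePhysicalNetBIOS')),
  @otherNode NVARCHAR(255),
  @instances INT = 1,      -- count this instance
  @budgetMB  INT = 12288,  -- memory available for SQL Server on one node, in MB
  @sql       NVARCHAR(MAX) = N'SELECT @OtherNode = CONVERT(NVARCHAR(255),
               SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));';

EXEC [SERVER\INSTANCE2].master..sp_executesql @sql,
  N'@OtherNode NVARCHAR(255) OUTPUT', @otherNode OUTPUT;
IF @otherNode = @thisNode SET @instances += 1;

EXEC [SERVER\INSTANCE3].master..sp_executesql @sql,
  N'@OtherNode NVARCHAR(255) OUTPUT', @otherNode OUTPUT;
IF @otherNode = @thisNode SET @instances += 1;

EXEC [SERVER\INSTANCE4].master..sp_executesql @sql,
  N'@OtherNode NVARCHAR(255) OUTPUT', @otherNode OUTPUT;
IF @otherNode = @thisNode SET @instances += 1;

-- e.g. 12288 MB alone, 6144 MB with one neighbour, 4096 MB with two, 3072 MB with three
DECLARE @optimalMemory INT = @budgetMB / @instances;

EXEC master..sp_executesql
  N'EXEC sp_configure N''max server memory'', @om; RECONFIGURE WITH OVERRIDE;',
  N'@om INT', @optimalMemory;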
Note that this will cause other side effects, such as clearing the plan cache (on an instance that didn't just restart or fail over; otherwise the plan cache would be empty anyway), but these are arguably better than leaving both instances to assume they still have 12 GB of memory to play with - there will be a lot of thrashing if they're both heavily used.
You may also want to consider other options, such as global max degree of parallelism, NUMA/CPU affinity, etc., depending on how sensitive the system is to the amount of resources available; a quick sketch of the MAXDOP case follows.
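For example, the same startup procedure could also lower the instance-wide MAXDOP when the node is shared (a minimal sketch; the value 4 is just a placeholder, not a recommendation):
-- Hypothetical: reduce the global max degree of parallelism when sharing a node.
-- 'max degree of parallelism' is an advanced option, so 'show advanced options' must be on.
EXEC sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure N'max degree of parallelism', 4; -- placeholder value
RECONFIGURE WITH OVERRIDE;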
The actual memory in use won't increase until you load data into memory. It's not going to jump to 10 GB combined allocated just because that's what you've set max memory to and because your databases are at least that large. It's a max, not a min (and never mind that min doesn't work that way either), and it isn't going to try to guess - before you've run enough queries - which tables/indexes it should load into buffer pool memory.
So yes, it is normal to see a gap in these values. You can probably get much closer to the max values if you pump the buffer pool by issuing a bunch of SELECT * FROM dbo.BiggestTables;
on each instance, but it is usually better to let SQL Server allocate the memory as it deems appropriate based on actual usage by your real application. And remember too that max server memory doesn't just cover the buffer pool.
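If you want to see how close each instance actually is to its configured limit, comparing the memory manager counters should show the gap (these counter names are standard in sys.dm_os_performance_counters):
-- Total = memory currently committed by the instance; Target = what it would like to commit
SELECT counter_name, cntr_value AS kb
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Total Server Memory (KB)', N'Target Server Memory (KB)');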
In short, SQL Server is pretty good at memory management; let it do its thing, and worry about adjusting max memory on your various instances when you have an actual performance issue.
Best Answer
The "max server memory" option only applies to the buffer pool, not to memory reserved for connections.