I would suggest you leave Min Server Memory at the default.
Min server memory controls the minimum amount of physical memory that SQL Server will try to keep committed. When the SQL Server service starts, it does not acquire all the memory configured in min server memory; it starts with only what it needs and grows as necessary. Once memory usage has grown beyond the min server memory setting, SQL Server won't release any memory below that amount.
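For reference, a quick way to see where both settings currently stand (configured value vs. the value actually in use) is sys.configurations:

-- Current min/max server memory settings, in MB.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');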
Bob Dorr explains this setting as follows:
Min Server Memory
Use the min server memory setting with care. This is a floor for SQL Server. Once committed memory reaches the min server memory setting, SQL Server won't release memory below that mark. If you set max server memory to 59GB and min server memory to 56GB, but the server needs SQL Server to back down to 53GB, SQL Server won't drop below 56GB. When you combine this setting with locked pages in memory, the memory can't be paged. This can lead to unwanted performance behaviors and allocation failures.
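A quick sanity check of where the instance actually sits relative to that floor (committed memory vs. its target, and whether locked pages are in play) is available from the memory DMVs; the column names below assume SQL Server 2012 or later:

-- How much memory is committed vs. the current target, and whether
-- locked pages in memory is actually being used.
SELECT
    committed_mb        = osi.committed_kb / 1024,
    committed_target_mb = osi.committed_target_kb / 1024,
    locked_pages_mb     = opm.locked_page_allocations_kb / 1024,
    opm.memory_utilization_percentage
FROM sys.dm_os_sys_info AS osi
CROSS JOIN sys.dm_os_process_memory AS opm;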
Searching the web, you will find plenty of information about max server memory.
This is because min server memory is rarely tuned (people just leave it at the default); max server memory is what generally gets tuned, since it is the "ceiling" for the buffer pool. A good value for max server memory ensures that Windows and the other processes running on the server have enough physical memory to do their work without forcing SQL Server's working set to be trimmed.
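Mechanically, that tuning is just sp_configure; the 12288 below is a placeholder, not a recommendation (max server memory is an advanced option, so show advanced options has to be on to change it this way):

-- Example only: cap max server memory at 12 GB (the value is in MB).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 12288;
RECONFIGURE WITH OVERRIDE;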
You should absolutely make the most use of the hardware when you are in an optimal config, and adjust when you are in maintenance mode. And yes, you will have an issue while both (or all four?) instances are active on the same node. Since a failover induces a service start on the now-active node, you can adjust the max memory of each server in that event using a startup procedure. I blogged about this here, but for a different reason (failing over to a node with a different amount of memory):
Basically, you just need to check whether both instances are on the same node (this will require a linked server to be set up in both directions), and adjust accordingly. A very quick and completely untested example, based on my blog post and assuming there is currently only one instance on each node (the question is a bit ambiguous about whether you have 2 total instances or 4):
-- Must be created in master so it can be marked as a startup procedure.
CREATE PROCEDURE dbo.OptimizeInstanceMemory
AS
BEGIN
  SET NOCOUNT ON;

  DECLARE
    @thisNode      NVARCHAR(255) = CONVERT(NVARCHAR(255),
                      SERVERPROPERTY('ComputerNamePhysicalNetBIOS')),
    @otherNode     NVARCHAR(255),
    @optimalMemory INT = 12288, -- 12 GB
    @sql           NVARCHAR(MAX);

  -- Ask the other instance (via linked server) which physical node it is running on.
  SET @sql = N'SELECT @OtherNode = CONVERT(NVARCHAR(255),
    SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));';

  EXEC [SERVER\INSTANCE].master..sp_executesql @sql,
    N'@OtherNode NVARCHAR(255) OUTPUT', @otherNode OUTPUT;

  IF @thisNode = @otherNode
  BEGIN -- we're on the same node, let's make everyone happy
    SET @optimalMemory = 6144;
  END

  -- Apply the new cap locally and on the other instance
  -- (assumes 'show advanced options' is already enabled on both).
  SET @sql = N'EXEC sp_configure N''max server memory'', @om;
    RECONFIGURE WITH OVERRIDE;';

  EXEC master..sp_executesql @sql, N'@om INT', @optimalMemory;
  EXEC [SERVER\INSTANCE].master..sp_executesql @sql, N'@om INT', @optimalMemory;
END
GO
EXEC [master].dbo.sp_procoption
N'dbo.OptimizeInstanceMemory', 'startup', 'true';
Of course create it again on the other instance, swapping the linked server name used.
This gets a little more complex if you have to adjust depending on whether you are sharing the current node with 1, 2 or 3 other instances.
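As a rough illustration of that more general case, here is a sketch (hypothetical linked server names, completely untested) that counts how many of the known instances report the same physical node and splits a memory budget evenly among them:

-- Hypothetical sketch only: [SERVER\INSTANCE2] is a placeholder linked server name;
-- repeat the probe once per other instance and adjust the budget to taste.
DECLARE
  @thisNode  NVARCHAR(255) = CONVERT(NVARCHAR(255),
               SERVERPROPERTY('ComputerNamePhysicalNetBIOS')),
  @otherNode NVARCHAR(255),
  @budgetMB  INT = 24576, -- total memory to hand to SQL Server on one node
  @coLocated INT = 1,     -- count this instance itself
  @sql       NVARCHAR(MAX) = N'SELECT @n = CONVERT(NVARCHAR(255),
               SERVERPROPERTY(N''ComputerNamePhysicalNetBIOS''));';

-- Repeat this block once per other instance / linked server:
EXEC [SERVER\INSTANCE2].master..sp_executesql @sql,
  N'@n NVARCHAR(255) OUTPUT', @otherNode OUTPUT;
IF @otherNode = @thisNode SET @coLocated += 1;

-- Split the budget among however many instances share this node,
-- then apply it with sp_configure exactly as in the procedure above.
SELECT [max server memory (MB)] = @budgetMB / @coLocated;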
Note that this will cause other side effects, such as clearing the plan cache (only relevant for an instance that didn't just restart or fail over, since otherwise the plan cache would be empty anyway), but these are arguably better than leaving both instances to assume they still have 12 GB of memory to play with - there will be a lot of thrashing if they're both heavily used.
You may also want to consider other options such as global maxdop, NUMA/CPU affinity etc. depending on how sensitive the system is to the amount of resources available.
Best Answer
You should always set max server memory away from the default and leave some room for the OS (see Jonathan's post on how much memory to leave based on the amount of RAM installed).
Jonathan Kehayias has blogged about this: How much memory does my SQL Server actually need?
You can also refer to my answer here for more details.
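As a rough illustration of the arithmetic that post describes (a commonly cited starting point: reserve 1 GB for the OS, plus 1 GB for every 4 GB of RAM between 4 and 16 GB, plus 1 GB for every 8 GB above 16 GB), a 64 GB server works out to 64 - 1 - 3 - 6 = 54 GB for max server memory:

-- Rule-of-thumb starting point only; validate against the actual workload.
DECLARE @totalGB INT = 64; -- physical RAM installed
SELECT max_server_memory_mb =
  ( @totalGB
    - 1                                                            -- OS
    - (CASE WHEN @totalGB > 16 THEN 12 ELSE @totalGB - 4 END) / 4  -- 4-16 GB range
    - (CASE WHEN @totalGB > 16 THEN @totalGB - 16 ELSE 0 END) / 8  -- above 16 GB
  ) * 1024;                                                        -- = 55296 for 64 GB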
No, don't leave it at the default: you risk OS unresponsiveness and working set trimming, and other applications running on the server will be adversely affected. It can affect your backups as well.
Note also that the memory manager changed in SQL Server 2012 and later.
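If you suspect the OS is already pushing back, one place to look is the resource monitor ring buffer, which records the memory pressure notifications SQL Server has received (diagnostic query only; the XML in the record column shows which notification, e.g. low physical memory, was raised):

-- Recent memory notifications seen by SQL Server's resource monitor.
-- rb.[timestamp] is in ms since instance start, hence the ms_ticks math.
SELECT TOP (20)
    seconds_ago = CONVERT(INT, (osi.ms_ticks - rb.[timestamp]) / 1000),
    rb.record
FROM sys.dm_os_ring_buffers AS rb
CROSS JOIN sys.dm_os_sys_info AS osi
WHERE rb.ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
ORDER BY rb.[timestamp] DESC;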