I have a production box hosting 5 instances.
Version: SQL Server 2014 SP3 Enterprise.
I discovered that even though the machine has ~400GB of memory assigned, the combined max memory settings across all 5 instances total less than 200GB.
The instances don't suffer from memory pressure, but as we already have that memory assigned, it's a waste not to use it.
I would like to increase it to a higher value, leaving 10% for the OS (some instances will be assigned more than others).
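For reference, the change itself is a per-instance `sp_configure` call. The value below is only an illustrative share of the ~400GB box, not a recommendation:

```sql
-- Illustrative only: raise max server memory on one instance.
-- 92160 MB (90GB) is a hypothetical per-instance share.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 92160;
RECONFIGURE;
```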
However, I've never done such a large increase of this setting before.
I know it's a dynamic setting which doesn't require a restart; however, I'm wondering about the following:
- Should I make the increase in 2 phases (i.e. increase by half the amount, wait a week, then increase the rest)?
- Can something suddenly break (or performance degrade) if SQL Server suddenly has so much more memory to play with?
We don't have the Lock Pages in Memory setting enabled, and we don't have trace flag 834 on.
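A quick sketch of how to confirm the current state before changing anything, using standard catalog views and DMVs (`locked_page_allocations_kb` greater than 0 would indicate Lock Pages in Memory is actually in effect):

```sql
-- Current max server memory setting for this instance.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';

-- Whether Lock Pages in Memory is in use by this instance.
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
```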
Best Answer
I have a slightly different opinion.
If you are not experiencing memory pressure and your application is working fine, why introduce an unknown variable (in your case, increasing SQL Server max memory)? You have not tested this change, so it's a risk.
If you have spare capacity on your server, you can instead look into consolidating more instances onto it.
Yes, this can happen: I faced a performance degradation issue after PROD got more memory, and using TF 2335 fixed the issue.
You can read Paul's answer for more details, but the gist is: the optimizer costs plans based on the memory available at compile time, so a large jump in memory can produce different (and sometimes slower) plans; TF 2335 makes SQL Server generate plans as if the server had a fixed, conservative amount of memory.
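If you do hit plan regressions after a large memory increase, the trace flag can be enabled globally at runtime with `DBCC TRACEON`, or persisted as a startup parameter; a minimal sketch:

```sql
-- Enable trace flag 2335 globally on the running instance.
DBCC TRACEON (2335, -1);

-- To make it survive restarts, add -T2335 as a startup
-- parameter via SQL Server Configuration Manager instead.
```

Treat this as a targeted workaround for a confirmed regression, not something to enable preemptively.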