SQL Server – Do I need to use -f to fix an insufficient max server memory setting?

sql-server, sql-server-2012

SQL Server Version:

SELECT @@VERSION;

/*
Microsoft SQL Server 2012 - 11.0.2100.60 (X64) 
Developer Edition (64-bit) on Windows NT 6.1  (Build 7601: Service Pack 1)
*/

Issue:

So, I was playing around with the max server memory setting and accidentally set it to 128 MB; now I cannot open a connection to SQL Server. When I tried to open a session to fix the mistake, I got this wonderful pop-up:

[screenshot: error pop-up for the failed connection]

In order to save myself, I decided to add the -f parameter to SQL Server and bounce the service.

See more about that here:

Server Memory Server Configuration Options

and here:

Database Engine Service Startup Options
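For reference, the same bounce can be done from an elevated command prompt instead of Configuration Manager. A sketch, assuming a default instance named MSSQLSERVER:

```
REM Stop the instance, then restart it with minimal configuration (-f).
REM Adding /m"SQLCMD" additionally restricts the single available
REM connection to the sqlcmd application, so nothing else grabs it first.
net stop MSSQLSERVER
net start MSSQLSERVER /f /m"SQLCMD"
```

Both /f and /m are documented Database Engine startup options; passing them via net start avoids editing the startup parameters in Configuration Manager and then having to remember to remove them.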

[screenshot: startup parameters in SQL Server Configuration Manager with -f added]

Once the service was restarted, I tried once again to connect. I got the following message:

[screenshot: connection error message]

This was expected, according to the information I had about the -f flag. I was then able to open a connection using the Dedicated Admin Connection (DAC) and run sp_configure to alter the max memory setting. Here is what sp_configure showed before I made any change; the running value appears to be infinity again (or close enough).
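If SSMS won't cooperate, the DAC can also be opened from the command line. A sketch, assuming a local default instance and Windows authentication:

```
REM -A requests the Dedicated Admin Connection; -E uses Windows auth.
REM Run this on the server itself, since by default the DAC only
REM listens locally.
sqlcmd -S localhost -A -E
```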

[screenshot: sp_configure output before the change]

I ran this update to increase the max memory to 3GB:

USE master;
GO
EXEC sp_configure 'max server memory (MB)', 3072;
GO
RECONFIGURE;
GO
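To confirm the change actually took effect, sys.configurations shows both the configured and the running value in one row (a sketch):

```
-- value = configured value; value_in_use = running value after RECONFIGURE
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';
```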

I then removed the -f parameter and restarted SQL Server. At that point, the server became accessible in multi-user mode with a max memory value of 3 GB. This is not the most elegant solution I have ever encountered, but it worked, so I decided to share it here.

Does someone have a quicker and cleaner solution?

Best Answer

I was able to reset the memory by establishing a DAC connection and without ever resorting to the -f startup parameter. However, I had to be persistent:

  • At 128 MB, you will get all kinds of errors in the error log:

SQL Server has encountered 1 occurrence(s) of cachestore flush for the 'Object Plans' cachestore (part of plan cache) due to some database maintenance or reconfigure operations.

(repeat for other caches)

Failed allocate pages: FAIL_PAGE_ALLOCATION 1

Verbose DBCC MEMORYSTATUS dumps

errors about audit / extended events data potentially lost

There is insufficient system memory in resource pool 'internal/default' to run this query.

  • In some cases, the errors will be persistent enough to simply shut SQL Server down (and nothing will be recorded in the log). This happened twice during my experiments, and may also be influenced by the size of the databases attempting to be recovered on subsequent restarts. You should check whether any of the errors you receive while attempting to establish a DAC connection actually stem from the service not running (or perhaps in the process of shutting down).
  • In each case, I had to go to Configuration Manager and restart the service. On the second try, I think the service shut down (or was in the process) before I established the DAC connection. On the third try, there was a lot less recovery action on startup, and I was able to establish a DAC query window and adjust max server memory, then when I restarted, everything went back to normal.
  • Note that I was using 11.0.2376, and I recommend you perform this kind of edge case testing on builds you are likely to run in production. (Things like this may have been fixed post-RTM, and not all fixes are public.) If you're running RTM in production, then I kind of take that back, and will suggest that you reconsider that approach.
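Once you have any connection at all, you can check for the messages above without scrolling the whole log. A sketch using sp_readerrorlog (undocumented but widely used; the parameters are log number, log type, and a search string):

```
-- Search the current SQL Server error log (0, type 1 = SQL Server)
-- for memory-related messages
EXEC sp_readerrorlog 0, 1, N'insufficient system memory';
```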

If you find that in your case you are also experiencing this "oops, SQL Server is no longer running" symptom, you could probably attach a debugger and determine exactly how SQL Server is shutting down.

So, can you do it without -f? Yes. Is it any cleaner? Only if you're really lucky. If your databases are large, you are more likely to want a predictable number of restarts (two), because the recovery process could be a lot more involved than mine was.

Of course, an even cleaner approach would be to not mess around with max server memory using the GUI or ad hoc scripts that are prone to fudges - in production you want a second set of eyes and proper, validated, source-controlled scripts that are much more predictable than "playing around." :-)