Linux – why is kernel.shmall so low by default

Tags: limit, linux, memory, security, shared-memory

I run DB2 on Linux, where I have to allocate the vast majority of the machine's memory to shared memory segments.

This page is typical of the info that I've found about shmall/shmmax:
http://www.pythian.com/news/245/the-mysterious-world-of-shmmax-and-shmall/

My system is running fine now, but I'm wondering whether there's a historical or philosophical reason why the shared memory limits are so low by default. In other words, why not let shmall default to the total physical memory on the machine?

Put another way, why should a typical admin need to be 'protected from himself' and have to go in and change these settings whenever an app happens to use a lot of shared memory? The only benefit I can think of is that it lets me set an upper bound on how much memory DB2 can use, but that's a special case.
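For concreteness, here is a minimal sketch (assuming the usual Linux /proc layout) that compares the current limits against physical RAM. Note that shmall is counted in pages while shmmax is in bytes, which is easy to trip over:

```c
/* Sketch: compare the shm limits against physical RAM.
 * Assumes a Linux system exposing /proc/sys/kernel/shmall
 * and /proc/sys/kernel/shmmax. Compile with: cc -o shmcheck shmcheck.c */
#include <stdio.h>
#include <unistd.h>

static unsigned long long read_ull(const char *path)
{
    unsigned long long v = 0;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%llu", &v) != 1)
            v = 0;
        fclose(f);
    }
    return v;
}

int main(void)
{
    long page  = sysconf(_SC_PAGESIZE);
    long pages = sysconf(_SC_PHYS_PAGES);
    unsigned long long shmall = read_ull("/proc/sys/kernel/shmall");
    unsigned long long shmmax = read_ull("/proc/sys/kernel/shmmax");

    /* shmall is a system-wide cap in pages; shmmax is a per-segment cap in bytes */
    printf("physical RAM : %llu bytes\n", (unsigned long long)pages * page);
    printf("shmall cap   : %llu bytes (system-wide)\n", shmall * (unsigned long long)page);
    printf("shmmax cap   : %llu bytes (per segment)\n", shmmax);
    return 0;
}
```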

Best Answer

Shared memory is not always a protected resource; on many systems any user can allocate it. It is also not automatically returned to the memory pool when the process that allocated it dies, so segments can remain allocated but unused. The result is a memory leak that may not be obvious.
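A minimal sketch of that failure mode (the key 0x0DB2CAFE and the 64 MB size are made up for the example): the segment created here survives the process's exit and still shows up in ipcs -m afterwards.

```c
/* Create a System V segment, attach it, then exit WITHOUT removing it.
 * The segment stays allocated after the process dies. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* 64 MB segment with permissive mode bits, as many apps use */
    int shmid = shmget(0x0DB2CAFE, 64UL * 1024 * 1024, IPC_CREAT | 0666);
    if (shmid == -1) {
        perror("shmget");
        return 1;
    }

    char *p = shmat(shmid, NULL, 0);
    if (p == (char *) -1) {
        perror("shmat");
        return 1;
    }
    strcpy(p, "orphaned on exit");

    /* A well-behaved program would call shmctl(shmid, IPC_RMID, NULL)
     * (or at least shmdt(p)); exiting here leaves the segment behind. */
    return 0;
}
```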

Keeping the shared memory limits low lets the majority of processes, which use shared memory only in small amounts, run normally, while limiting the potential damage. The only systems I have used which require large amounts of shared memory are database servers. These are usually administered by system administrators who are aware of the requirements; if not, the DBA usually is, and can ask for the appropriate configuration changes. The database installation instructions usually specify how to calculate and set the appropriate limits.
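As an illustration of the kind of sizing rule those instructions give (the exact values vary by product; treat the one-segment-as-large-as-RAM rule here as a placeholder), this sketch prints sysctl settings sized to physical RAM:

```c
/* Sketch: derive shm limit settings from physical RAM.
 * The sizing rule is a placeholder; consult your database's install guide. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned long long page  = (unsigned long long)sysconf(_SC_PAGESIZE);
    unsigned long long pages = (unsigned long long)sysconf(_SC_PHYS_PAGES);
    unsigned long long ram   = page * pages;

    /* e.g. allow one segment as large as RAM, and cap the system-wide
     * total at all of RAM (remember shmall is in pages, not bytes) */
    printf("kernel.shmmax = %llu\n", ram);
    printf("kernel.shmall = %llu\n", pages);
    return 0;
}
```

The printed values would then go into /etc/sysctl.conf and be applied with sysctl -p.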

I have had databases die and leave large amounts of shared memory allocated but unused. This created problems for users of the system, and prevented the database from being restarted. Fortunately, there were tools which allowed the memory to be located and released.
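The answer does not name the tools, but on Linux they are typically ipcs -m (to list segments, including the number of attached processes) and ipcrm (to remove them). A sketch of the same enumeration using the Linux-specific SHM_INFO/SHM_STAT shmctl commands:

```c
/* Walk the kernel's segment table and flag segments with no attached
 * processes, roughly what ipcs -m shows. SHM_INFO and SHM_STAT are
 * Linux-specific and need _GNU_SOURCE. Removing a segment would be
 * shmctl(id, IPC_RMID, NULL), which is what ipcrm does. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    struct shm_info info;
    /* SHM_INFO returns the index of the highest used table entry */
    int maxidx = shmctl(0, SHM_INFO, (struct shmid_ds *)&info);
    if (maxidx < 0) {
        perror("shmctl(SHM_INFO)");
        return 1;
    }

    for (int idx = 0; idx <= maxidx; idx++) {
        struct shmid_ds ds;
        int id = shmctl(idx, SHM_STAT, &ds);  /* maps index -> shmid */
        if (id < 0)
            continue;  /* unused table slot */
        if (ds.shm_nattch == 0)  /* allocated, but nothing attached */
            printf("orphaned shmid %d: %zu bytes\n", id, ds.shm_segsz);
    }
    return 0;
}
```

Note that nattch == 0 alone does not prove a segment is orphaned; a process may simply not have attached yet, so a real cleanup tool should also check the creator PID and segment timestamps before removing anything.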
