I am trying to run a shell script that creates processes using another shell script, and I get a "Resource temporarily unavailable" error. How do I identify which limit (memory/process/file count) is causing this problem? Below are my ulimit -a results.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 563959
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 10000000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Best Answer
For the case in the comments, where you were not using much memory per thread, you were hitting the cgroup limits. You will find the default to be around 12288, but the value is writable:
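A sketch of checking and raising it, assuming a cgroup-v1 layout and user ID 1000 (both assumptions; the exact path varies by distribution and session):

    # Check the current per-user task limit (default here ~12288):
    cat /sys/fs/cgroup/pids/user.slice/user-1000.slice/pids.max

    # Make it much larger:
    echo 4194304 | sudo tee /sys/fs/cgroup/pids/user.slice/user-1000.slice/pids.max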
And if I use my "what is the thread limit" program (found here) to check before and after, the count tops out near the 12288 default beforehand and near the new limit afterwards.
Of course, the numbers above are not exact, because the "doug" user has a few other threads running, such as my SSH sessions to my server. Check with:
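One way to do that count (an assumed command, since the original isn't quoted; ps -eL prints one line per thread):

    # Count every thread owned by user "doug":
    ps -eL -o user= | grep -c '^doug'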
Program used:
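A minimal C sketch of such a tester, assuming the linked program did roughly this (create sleeping threads until pthread_create fails, then report the count):

    /*
     * Sketch of a "what is the thread limit" tester.
     * Build: gcc -pthread thread-limit.c -o thread-limit
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void *idle(void *arg)
    {
        (void)arg;
        pause();               /* block until a signal; keeps the thread alive */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        unsigned long count = 0;

        pthread_attr_init(&attr);
        /* Use small stacks so memory is not the first limit hit. */
        pthread_attr_setstacksize(&attr, 64 * 1024);

        for (;;) {
            pthread_t tid;
            int err = pthread_create(&tid, &attr, idle, NULL);
            if (err != 0) {
                printf("failed after %lu threads: %s\n", count, strerror(err));
                return 0;
            }
            count++;
        }
    }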
EDIT May 2020: For newer versions of Ubuntu, the default maximum PID number is now 4194304, and so adjusting it is no longer needed.
Now, if you have enough memory, the next limit will be defined by the default maximum PID number, which is 32768, but it is also writable. Obviously, in order to have more than 32768 simultaneous processes or tasks or threads, their PIDs will have to be allowed to go higher:
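For example (the exact number used isn't preserved; anything above 2**16 makes the point):

    # Check the current maximum PID number:
    cat /proc/sys/kernel/pid_max

    # Raise it well past 2**16, e.g.:
    sudo sysctl -w kernel.pid_max=300000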
Note that it is quite on purpose that a number bigger than 2**16 was chosen, to see if it was actually allowed. And so now, set the cgroup max to, say, 70000:
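Using the same assumed cgroup path as above:

    echo 70000 | sudo tee /sys/fs/cgroup/pids/user.slice/user-1000.slice/pids.max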
And at this point, realize that the above listed program seems to have a limit of about 32768 threads, even if resources are still available, so another method is needed (a stand-in is sketched below). My test server with 16 gigabytes of memory seems to exhaust some other resource at about 62344 tasks, even though top showed there was still memory available.
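As a crude stand-in for that other method (an assumption, not the original's tool), one can fork background tasks until the shell itself starts reporting fork failures:

    # Spawn sleepers until "fork: retry: Resource temporarily unavailable"
    # appears, then count how many actually started:
    for i in $(seq 1 70000); do sleep 600 & done
    jobs -p | wc -l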
It seems I finally hit my default ulimit settings for both max user processes (-u) and pending signals (-i).
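Those two limits can be checked directly in the shell:

    ulimit -u    # max user processes
    ulimit -i    # pending signals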
If I raise those limits, which in my case I did via /etc/security/limits.conf (entries sketched below), I am able to get to 126020 threads before the inability to fork returns. This time the limit came from a system-wide kernel parameter rather than a per-user one; keep in mind that there are about 150 root-owned threads on this server before the test starts.
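The limits.conf entries would be along these lines (the values are illustrative, not the exact ones used; nproc is max user processes, sigpending is pending signals):

    # /etc/security/limits.conf
    # <domain>  <type>  <item>       <value>
    doug        soft    nproc        200000
    doug        hard    nproc        200000
    doug        soft    sigpending   200000
    doug        hard    sigpending   200000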
O.K., so now adjust that parameter, as sketched below.
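Assuming the parameter in question was kernel.threads-max, which would be consistent with a ceiling just above 126,000 on a 16 gigabyte machine, the adjustment would be:

    # Check the current system-wide thread limit:
    cat /proc/sys/kernel/threads-max

    # Raise it (illustrative value):
    sudo sysctl -w kernel.threads-max=200000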
I can get to about 132,000 threads before my 16 gigabyte server starts to swap memory, and trouble erupts.
Note: running top places a significant additional load on the system under these conditions, so I didn't run it; I checked memory another way.
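A lighter-weight check (assumed here; the original command is not quoted) could be:

    # Sample memory and swap once, without top's overhead:
    free -m
    grep -E 'MemAvailable|SwapFree' /proc/meminfo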
At some point you will get into trouble, but it is absolutely amazing how gracefully the system bogs down. Once my system started to swap, it totally bogged down, and I got many fork-failure errors. My load average ballooned to ~29000, but I just left the computer for an hour and it sorted itself out. I staggered the spin-out of the threads by 200 microseconds per spin-out, and that also seemed to help.
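That staggering amounts to a short delay between creations; in the C sketch above it would be one extra line inside the loop, right after a successful pthread_create:

    usleep(200);    /* stagger each spawn by 200 microseconds */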