Linux max threads count

linux, multithreading

My server runs on Amazon EC2 Linux, with a MongoDB server on it. The MongoDB server has been running under heavy load, and unfortunately I've run into a problem with it :/

As is well known, MongoDB creates a new thread for every client connection, and this worked fine before. I don't know why, but MongoDB can't create more than 975 connections on the host as a non-privileged user (it runs under the mongod user). When I run it as root, however, it can handle up to 20000 connections (MongoDB's internal limit). Further investigation shows that the problem isn't the MongoDB server, but Linux itself.

I've found a simple program which checks the maximum number of threads:

/* compile with:   gcc -lpthread -o thread-limit thread-limit.c */
/* originally from: http://www.volano.com/linuxnotes.html */

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <string.h>

#define MAX_THREADS 100000
#define THREAD_STACK_SIZE (1024 * 1024)   /* 1 MB per-thread stack */

static void *run(void *arg) {
  (void) arg;
  sleep(60 * 60);
  return NULL;
}

int main(void) {
  /* static: 100000 pthread_t's would overflow a small main-thread stack */
  static pthread_t thread[MAX_THREADS];
  pthread_attr_t thread_attr;
  int rc = 0;
  int i;

  pthread_attr_init(&thread_attr);
  pthread_attr_setstacksize(&thread_attr, THREAD_STACK_SIZE);

  printf("Creating threads ...\n");
  for (i = 0; i < MAX_THREADS && rc == 0; i++) {
    rc = pthread_create(&thread[i], &thread_attr, run, NULL);
    if (rc == 0) {
      pthread_detach(thread[i]);
      if ((i + 1) % 100 == 0)
        printf("%i threads so far ...\n", i + 1);
    } else {
      printf("Failed with return code %i creating thread %i (%s).\n",
             rc, i + 1, strerror(rc));

      /* can we still allocate memory? */
      char *block = malloc(65545);
      if (block == NULL)
        printf("Malloc failed too :(\n");
      else
        printf("Malloc worked, hmmm\n");
      free(block);
    }
  }
  sleep(60 * 60); /* Ctrl+C to exit; makes it easier to see memory use */
  return 0;
}

And the situation repeats itself: as root I can create around 32k threads, but as a non-privileged user (mongod or ec2-user) only around 1000.

This is the ulimit -a output for the root user:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 59470
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 60000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

This is the ulimit -a output for the mongod user:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 59470
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 60000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 1024
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
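Note that the limits a running daemon actually inherited can differ from what a login shell reports; they can be read from /proc. A sketch (shown against the current shell; for the daemon, substitute the mongod PID, e.g. via pidof):

```shell
# Show the limits a running process actually inherited at start time.
# Replace "self" with $(pidof mongod) to inspect the daemon.
grep -E 'Max (processes|stack size)' /proc/self/limits
```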

Kernel max threads:

bash-4.1$ cat /proc/sys/kernel/threads-max 
118940

SELinux is disabled. I don't know how to solve this strange problem… perhaps somebody does?

Best Answer

Your issue is the max user processes limit (RLIMIT_NPROC). Both listings above show it at 1024, but a root-owned process holds CAP_SYS_RESOURCE and is not bound by it, which is why only the non-privileged user hits the wall.

From the getrlimit(2) man page:

RLIMIT_NPROC The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process. Upon encountering this limit, fork(2) fails with the error EAGAIN.

Same for pthread_create(3):

EAGAIN Insufficient resources to create another thread, or a system-imposed limit on the number of threads was encountered. The latter case may occur in two ways: the RLIMIT_NPROC soft resource limit (set via setrlimit(2)), which limits the number of processes for a real user ID, was reached; or the kernel's system-wide limit on the number of threads, /proc/sys/kernel/threads-max, was reached.

Increase that limit for your user, and it should be able to create more threads, until it reaches other resource limits.
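One way to raise it, as a sketch (the limits.conf lines assume pam_limits is active at login; 20000 is just chosen to match MongoDB's internal cap):

```shell
# Current soft limit on processes/threads for this user:
ulimit -S -u

# Raise it for this shell session (cannot exceed the hard limit):
#   ulimit -u 20000

# Persist it via /etc/security/limits.conf (read by pam_limits):
#   mongod  soft  nproc  20000
#   mongod  hard  nproc  20000
```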
Or plain resource exhaustion: with a 1 MB stack and 20k threads, you'll need about 20 GB of virtual memory.
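The arithmetic behind that warning, spelled out:

```shell
# 20000 threads, each reserving a 1 MB (1024 KB) stack, expressed in GB
# (integer division, so the result rounds down):
threads=20000
stack_kb=1024
echo "$(( threads * stack_kb / 1024 / 1024 )) GB of stack address space"
```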
See also NPTL caps maximum threads at 65528?: /proc/sys/vm/max_map_count could become an issue at some point.
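Both system-wide ceilings mentioned above can be inspected (and, as root, raised) via sysctl; the values in the commented commands are only illustrative:

```shell
# System-wide thread ceiling, and the per-process mmap-region ceiling
# (each thread stack consumes at least one mapping):
cat /proc/sys/kernel/threads-max
cat /proc/sys/vm/max_map_count

# To raise them (as root):
#   sysctl -w kernel.threads-max=200000
#   sysctl -w vm.max_map_count=262144
```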

Side point: you should use -pthread instead of -lpthread. See gcc - significance of -pthread flag when compiling.
