When the shell exits, it may send the HUP signal to background jobs, which may cause them to exit. SIGHUP is only sent if the shell itself receives a SIGHUP, i.e. only if the terminal goes away (e.g. because the terminal emulator process dies), and not if you exit the shell normally (with the exit builtin or by typing Ctrl+D). See In which cases is SIGHUP not sent to a job when you log out? and Is there any UNIX variant on which a child process dies with its parent? for more details.
In bash, you can set the huponexit option to also send SIGHUP to background jobs on a normal exit. In ksh, bash and zsh, calling disown on a job removes it from the list of jobs to send SIGHUP to. A process that receives SIGHUP may ignore or catch the signal, in which case it won't die. Using nohup when you run a program makes it immune to SIGHUP.
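As a rough sketch of those three approaches in practice (long_running_command is just a placeholder for whatever you are running):
shopt -s huponexit             # bash: also send SIGHUP to background jobs on a normal exit
long_running_command &         # start a background job ...
disown %1                      # ... then drop it from the job list so it won't get SIGHUP
nohup long_running_command &   # or make it immune to SIGHUP from the start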
If the process isn't killed by SIGHUP (or never receives one), it simply remains behind; once the shell has exited, nothing relates it to a job number anymore.
The process may still die if it tries to access the terminal, since the terminal no longer exists; that depends on how the program reacts to a missing terminal.
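If you later want to spot such leftover processes, one Linux-flavoured way (sketched here, and it catches daemons too, not just your former jobs) is to list everything that no longer has a controlling terminal:
ps -e -o pid,tty,comm | awk '$2 == "?"'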
If the job contains multiple processes (e.g. a pipeline), then all these processes are in one process group. Process groups were invented precisely to capture the notion of a shell job that is made up of multiple related processes. You can see processes grouped by process group by displaying their process group ID (PGID — normally the process ID of the first process in the group), e.g. with ps l under Linux or something like ps -o pid,pgid,tty,etime,comm portably.
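For example, if you know the PID of any process in the pipeline (say 5678 — a made-up number), you can look up its group with the standard ps options:
ps -o pgid= -p 5678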
You can kill all the processes in a group by passing a negative argument to kill. For example, if you've determined that the PGID for the pipeline you want to kill is 1234, then you can kill it with
kill -TERM -1234
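If you like, the lookup and the kill can be folded into one line (a sketch, again with the hypothetical PID 5678; tr strips the padding ps prints around the number):
kill -TERM -"$(ps -o pgid= -p 5678 | tr -d ' ')"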
I found this Oracle article about the OOM Killer (Out Of Memory Killer), which answers half of your question, especially the 'Configuring the OOM Killer' chapter.
I extracted two commands from it that I think are important:
- Disable OOM Killer
root@host:~# sysctl vm.overcommit_memory=2
- Exclude a process from OOM Killer
root@host:~# echo -17 > /proc/<pid>/oom_adj
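To check the current settings before you change anything (note that newer kernels also expose /proc/&lt;pid&gt;/oom_score_adj, where -1000 plays roughly the role that -17 plays for the older oom_adj interface):
root@host:~# sysctl vm.overcommit_memory
root@host:~# cat /proc/&lt;pid&gt;/oom_adj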
Another very interesting answer is item 1.4 in the FAQ on the stress project page, which says:
1.4 Why is my CPU getting hammered but not my RAM?
This is because stress is fairly conservative in its default options. It is
pretty easy to render a system temporarily unusable by forcing the virtual
memory manager to thrash. So make sure you understand how much memory you
have and then pass the appropriate options. On a dual-core Intel system
with 3 GB of RAM a reasonable invocation is this:
stress -m 1 --vm-bytes 2G
Right, your question has not been answered yet. Let's look at the stress manual ...
-c, --cpu N
spawn N workers spinning on sqrt()
Maybe the above option could help; try setting it to zero. Oops, it doesn't work!?
After a look at the code I noticed that this option is disabled by default. I also noticed that the --vm-hang option may be what you want.
The default action of --vm is to spin on malloc()/free(), which is CPU intensive! --vm-hang makes each stress worker pause for the given number of seconds after allocating the memory, before freeing it.
Try to use the following (consumes ~128MB of RAM):
root@host:~# stress --vm 1 --vm-bytes 128000000 --vm-hang 3600
And do a test in another terminal:
root@host:~# top
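If you prefer to watch overall memory rather than per-process figures, something like this in another terminal also works (the 1-second interval is arbitrary):
root@host:~# watch -n 1 free -m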
Best Answer
The jobs are not killed, they are suspended. They remain exactly as they are at the time of the suspension: same memory mapping, same open files, same threads, … It's just that the process sits there doing nothing until it's resumed. It's like pausing a movie. A suspended process behaves exactly like a process that the scheduler stubbornly refuses to give CPU time to, except that the process state is recorded as suspended rather than running.
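A quick way to see this for yourself (sleep just stands in for any job; substitute the PID that jobs -l prints):
sleep 1000                       # press Ctrl+Z to suspend it
jobs -l                          # note the PID of the stopped job
ps -o pid,stat,comm -p &lt;pid&gt;     # on Linux, STAT shows T for a stopped (suspended) process
fg                               # the job resumes exactly where it left off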