How a fork bomb works: in C (or C-like) code, a function named `fork()` gets called. This causes Linux, Unix, or Unix-alikes to create an entirely new process. This process has an address space, a process ID, a signal mask, open file descriptors, and all manner of other things that take up space in the OS kernel's somewhat limited memory. The newly created process also gets a spot in the kernel's data structure of processes to run. To the process that called `fork()`, it looks like nothing happened. A fork-bomb process will try to call `fork()` as fast as it can, as many times as it can.
The trick is that the newly created process also comes back from `fork()` in the same code. After a fork, you have two processes running the same code. Each new fork-bomb process tries to call `fork()` as fast as it can, as many times as it can. The code you've given as an example is a Bash-script version of a fork bomb.
Soon, all the OS kernel's process-related resources get used up. The process table is full. The waiting-to-run list of processes is full. Real memory is full, so paging starts. If this goes on long enough, the swap partition fills up.
What this looks like to a user: everything runs super slowly. You get error messages like "could not create process" when you try simple things like `ls`. Trying a `ps` causes an interminable pause (if it runs at all) and gives back a very long list of processes. Sometimes this situation requires a reboot via the power cord.
Fork bombs used to be called "rabbits" back in the old days, because they reproduced so rapidly.
Just for fun, I wrote a fork bomb program in C:
#include <unistd.h>

int
main(int ac, char **av)
{
    while (1)
        fork();     /* every process loops, forking forever */
    return 0;
}
I compiled and ran that program under Arch Linux in one xterm. In another xterm I tried to get a process list:
1004 % ps -fu bediger
zsh: fork failed: resource temporarily unavailable
The Z shell in the 2nd xterm could not call `fork()` successfully, as the fork-bomb processes associated with the 1st xterm had used up all kernel resources related to process creation and running.
Is there any way to stop this without rebooting the machine?
It's not quite impossible, and you can do it via luck -- i.e., you manage to kill all the processes before another one is spawned.1 But you have to get very, very lucky, so it is not a reliable or worthwhile effort [maybe slm is luckier than me here, lol -- TBH I haven't tried that hard]. If you play around with priorities, your chances could improve (see `man nice`), although I suspect this will also mess with the efficacy of the fork bomb.
A better idea might be to use one that times out. For an example in C, see footnote number 5 to my answer here.2 You can do the same thing with a shell script, although it would not be as short as `:(){ :|:& };:` :
#!/bin/bash

export fbomb_duration=$1
export fbomb_start=$(date +%s)

go () {
    now=$(date +%s)
    if [[ $((now - fbomb_start)) -gt $fbomb_duration ]]; then
        exit 0
    fi
    go &
}

while ((1)); do
    go
done
Execute that with one argument, a number of seconds. All forks will die after that time.
1 In fact, it can happen all on its own, eventually, if the kernel OOM killer gets lucky. But don't hold your breath.

2 The method used there to hamstring that particular bomb (by setting `vm.overcommit_memory=2`) will almost certainly not work in general, but you could try. I'm not going to, since I'd like to leave my system running for now ;)
Best Answer
This fork bomb always reminds me of something an AI programming teacher said in one of the first lessons I attended: "To understand recursion, first you must understand recursion."
At its core, this bomb is a recursive function. In essence, you create a function which calls itself, which calls itself, which calls itself... until system resources are consumed. In this specific instance, the recursion is amplified by piping the function to itself AND backgrounding it.
I've seen this answered over on StackOverflow, and I think the example given there illustrates it best, just because it's easier to see what it does at a glance (stolen from the link above): define the bug function `☃() { ... }`, the body of which calls itself (the bug function), piping the output to itself (the bug function), `☃|☃`, and background the result with `&`. Then, after the function is defined, actually call the bug function: `; ☃`.

I note that, at least on my Arch VM, backgrounding the process is not required to achieve the same end result: consuming all available process space and rendering the host b0rked. Actually, now I've said that, it seems to sometimes terminate the runaway process, and after a screenful of
-bash: fork: Resource temporarily unavailable
it will stop with a `Terminated` (and `journalctl` shows bash core dumping).

To answer your question about csh/tcsh: neither of those shells supports functions; you can only alias. So for those shells you'd have to write a shell script which calls itself recursively.
zsh seems to suffer the same fate (with the same code); it does not core dump, and causes Arch to give

Out of memory: Kill process 216 (zsh) score 0 or sacrifice child.

but it still continues to fork. After a while it then states

Killed process 162 (systemd-logind) ...

(and still continues to have a forking zsh).

Arch doesn't seem to have a `pacman` version of ksh, so I had to try it on Debian instead. ksh objects to `:` as a function name, but using something else, say `b()`, instead seems to have the desired result.