Bash – Interrupt shell command line expansion

Tags: bash, command-line, interrupt, kill

Warning: DO NOT attempt the commands listed in this question without knowing their implications.

Sorry if this is a duplicate. I am surprised to learn that a command as simple as

echo $(yes)

freezes my computer (actually it lags the computer very badly rather than freezing it, but the lag is bad enough to make one think it has frozen). Typing Ctrl+C or Ctrl+Z right after entering this command does not seem to help me recover from this mistyped command.

On the other hand

ls /*/../*/../*/../*/../*/

is a well-known vulnerability that at best lags the computer badly and at worst crashes it.

Note that these commands are quite different from the well-known fork bombs.

My question is: Is there a way to interrupt such commands, which build up a huge amount of shell command line arguments, immediately after I start to execute them in the shell?

My understanding is that since shell expansion is done before the command is executed, the usual way to interrupt a command does not work: the command is not even running when the lag happens. I would like to confirm that this understanding is correct, and I am extremely interested in learning of any way to cancel the shell expansion before it consumes too much memory.
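A small, safe experiment (a hypothetical stand-in using `sleep` instead of `yes`) illustrates the ordering: the command substitution runs to completion before `echo` ever starts, and a signal delivered during that window is aimed at the expansion, not at `echo`. Here `timeout` plays the role of pressing Ctrl+C:

```shell
# The substitution $(sleep 5; ...) runs first; echo only starts after
# it finishes. `timeout 1` terminates the shell after one second,
# i.e. while the expansion is still in progress, so echo never runs.
timeout 1 bash -c 'echo "got: $(sleep 5; echo done)"'
echo "exit status: $?"   # 124 means timeout fired before echo ran
```

Whether an interactive Ctrl+C reaches the substitution in the same way depends on the shell and version, which is consistent with the accepted answer's observation that newer bash releases handle `echo $(yes)` more gracefully.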

I am not looking for how the kernel works at low memory. I am also not looking for SysRq overkills that may be helpful when the system already lags terribly. Nor am I looking for preventative approaches like imposing a ulimit on memory. I am looking for a way that can effectively cancel a huge shell expansion process from within the shell itself before it lags the system. I don't know whether it is possible. If it is impossible as commented, please also leave an answer indicating that, preferably with explanations.

I have chosen not to include any system-specific information in the original question because I want a general answer, but in case it matters, here is the information about my system: Ubuntu 16.04.4 LTS with gnome-terminal and bash 4.3.48(1), running on an x86_64 system. No virtual machines involved.

Best Answer

With GNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu), and not in a VM:

echo $(yes) 

exits the shell and does not freeze the system, and:

ls /*/../*/../*/../*/../*/

returns

bash: /bin/ls: Argument list too long
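That error comes from the kernel's `ARG_MAX` limit on the combined size of the arguments and environment passed to `execve()`: the glob expands to more than the kernel will accept, so `ls` is never started. You can query the limit and estimate the size of an expansion yourself (these are standard POSIX/Linux interfaces, not something specific to this answer):

```shell
# Kernel limit on exec() argument + environment size, in bytes.
getconf ARG_MAX

# Rough size of what a glob would pass to a command: each expanded
# path plus its terminating NUL, counted in bytes.
printf '%s\0' /*/ | wc -c
```

On typical Linux systems `ARG_MAX` is on the order of 2 MiB, which is why the five-level `/*/../*/` glob fails before `ls` can run.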

But as a rule, when you are dealing with something that could consume all the resources of a system, it is better to set limits before running it. If you know a process could be a CPU hog, you can start it with cpulimit, or lower its priority with nice/renice.
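As a sketch of the "set limits before running" approach, using `nice` (which I am certain of; cpulimit's options can vary between versions), a known hog can be started at the lowest scheduling priority so interactive processes still get the CPU:

```shell
# Start a CPU hog at the lowest priority (nice 19) so it yields to
# interactive work; discard its output so it cannot flood the terminal.
nice -n 19 yes > /dev/null &
hog_pid=$!

# The NI column confirms the niceness of the running process.
ps -o pid,ni,comm -p "$hog_pid"

# Stop the hog when done.
kill "$hog_pid"
```

Note this only tames the hog's CPU share; it does not limit memory, so it would not by itself stop a runaway shell expansion.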

If you want to limit processes that are already running, you will have to do it one by one, by PID, but a batch script like the one below can do that:

#!/bin/bash
# Collect the PIDs of the processes to limit; replace "tesseract"
# with the name of your process.
LIMIT_PIDS=$(pgrep tesseract)
echo "$LIMIT_PIDS"
for i in $LIMIT_PIDS    # unquoted on purpose: split into one PID per word
do
    cpulimit -p "$i" -l 10 -z &   # cap each process at 10% CPU
done

In my case pypdfocr launches the greedy tesseract.

Also, in some cases where your CPU is pretty good, you can just use renice, like this:

watch -n5 'pidof tesseract | xargs -L1 sudo renice +19'
Related Question