Following this, one can very well make that last plan of yours work. For the command to be sent not to be processed by the sending shell, it has to reach the pipe as a string (thus `echo "command"`, not `echo \`command\``). Then it has to be read by a background process (akin to a daemon, but not necessarily one) started in the appropriate terminal, and it should be evaluated by that same process.

But it is boilerplate-y to have a script per pipe, so let's generalize it into one script, `term-pipe-r.sh` (don't forget to `chmod +x` it!):
```bash
#!/bin/bash
pipe=$1                    # the pipe name is the first argument
trap 'rm -f "$pipe"' EXIT  # delete the pipe whenever the script exits

if [[ ! -p $pipe ]]; then  # if the pipe doesn't exist, create it
    mkfifo "$pipe"
fi

while true                 # loop forever...
do
    if read line < "$pipe"; then
        if [[ "$line" == 'close the term-pipe pipe' ]]; then
            break          # if the pipe-closing message is received,
        fi                 # leave the loop
        echo               # a line break is needed because of the prompt
        eval "$line"       # run the line: as this script is started in the
    fi                     # target terminal, the line will be run there
done

echo "<pipe closing message>"  # custom message at the end of the script
```
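The mechanism can be exercised end to end in a single shell. A minimal sketch with a hypothetical pipe name (one `read` instead of the loop, and the output captured to a file so it can be inspected afterwards):

```bash
pipe=/tmp/demo$$pipe          # hypothetical pipe name, just for this demo
out=/tmp/demo$$out
mkfifo "$pipe"
# the "reader" side, backgrounded (a one-shot stand-in for the while loop)
{ read line < "$pipe"; eval "$line" > "$out"; } &
# the "writer" side: send the command as a string, as described above
echo 'echo hello from the pipe' > "$pipe"
wait                          # let the background reader finish
cat "$out"                    # prints: hello from the pipe
rm -f "$pipe" "$out"
```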
So say you want `/dev/tty3` to receive commands: just go there and do

```bash
./term-pipe-r.sh tty3pipe &   # $1 will be tty3pipe (in a new process)
```
And to send commands, from any terminal (even from itself):

```bash
echo "command" > tty3pipe
```

or, to run a whole file there:

```bash
cat some-script.sh > tty3pipe
```
Note that this piping ignores files like `.bashrc`, and hence the aliases defined there, such as `alias ls='ls --color'`. Hope this helps someone out there.
Edit (note - advantage of non-daemon):
Above I said the pipe reader need not be a daemon, but having checked the differences, it turns out it is actually better for it to be a mere background process in this case. That way, when you close the terminal, the termination signal (`SIGHUP`, `SIGTERM`, or whatever) reaches the script too, its `EXIT` trap fires (see the line starting with `trap` in the script), and the pipe is deleted automatically, avoiding a useless process and file (and maybe more, if anything were still redirecting to the dead pipe).
Edit (automation):
Still, it is boring to have to run by hand a script you (I, at least) probably want most of the time. So, let's automate it! It should start in any terminal, and one file all of them read is `.bashrc`. Plus, it is annoying to have to type `./term-pipe-r.sh`. So, one may do:

```bash
cd /bin   # go to /bin, where Bash finds command names
ln -s /directory/of/term-pipe-r.sh tpr   # call it tpr (terminal pipe reader)
```

Now to run it you would only need `tpr tty3pipe &` in `/dev/tty3` whenever you wanted. But why do that when you can have it done automatically? So this should be added to `.bashrc`. But wait: how will it know the pipe name? It can base the name on the TTY (which can be known with the `tty` command), using simple regexes in `sed` (and some tricks). What you should add to `~/.bashrc` is then:

```bash
pipe="$(sed 's/\/dev\///' <<< `tty` | sed 's/\///')pipe"
#       ^^^- take out '/dev/' and the other '/', then add 'pipe'
tpr $pipe &   # start our script with the appropriate pipe name
```
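The same name derivation can also be sketched without `sed`, using a parameter expansion plus `tr`. Shown here with a sample device path standing in for the output of `tty`, since a non-interactive shell has no controlling terminal:

```bash
t=/dev/pts/0          # sample value; interactively you would use t=$(tty)
t=${t#/dev/}          # strip the leading '/dev/'          -> pts/0
pipe="$(printf '%s' "$t" | tr -d /)pipe"   # drop remaining '/' and append 'pipe'
echo "$pipe"          # prints: pts0pipe
```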
Best Answer
A shell function replacing `sponge`:

This `mysponge` shell function passes all data available on standard input on to a temporary file. When all data has been redirected to the temporary file, the collected data is copied to the file named by the function's argument. If data is not to be appended to the file (i.e. `-a` is not used), and if the given output filename refers to an existing regular file or does not exist at all, then this is done with `mv` (in the case of an existing regular file, an attempt is first made to transfer its file modes to the temporary file using GNU `chmod`). If the output is something that is not a regular file (a named pipe, standard output, etc.), the data is written out with `cat`. If no file was given on the command line, the collected data is sent to standard output.

At the end, the temporary file is removed. Each step in the function relies on the successful completion of the previous step; no attempt is made to remove the temporary file if one command fails (it may contain important data).

If the named file does not exist, then it will be created with the user's default permissions etc., and the data arriving from standard input will be written to it.
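The function itself is not reproduced in this excerpt. A sketch consistent with the description above (the name `mysponge` comes from the text; the exact body is a reconstruction, not necessarily the original):

```bash
mysponge () (
    tmp=$(mktemp) &&                # mktemp: not standard, but common
    cat >"$tmp" &&                  # soak up all of standard input first
    if [ "$1" = -a ]; then          # -a: append mode
        shift
        cat "$tmp" >>"$1"
    elif [ -n "$1" ]; then
        if [ -f "$1" ]; then        # existing regular file: copy its modes
            chmod --reference="$1" "$tmp"   # GNU chmod
        fi
        if [ -f "$1" ] || [ ! -e "$1" ]; then
            mv "$tmp" "$1"          # regular or nonexistent: move into place
        else
            cat "$tmp" >"$1"        # named pipe, standard output, etc.
        fi
    else
        cat "$tmp"                  # no file given: write to standard output
    fi &&
    rm -f "$tmp"                    # remove the temporary file at the end
)
```

Each stage is chained with `&&`, so a failure anywhere leaves the temporary file in place, as described.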
The `mktemp` utility is not standard, but it is commonly available. The above function mimics the behaviour described in the manual for `sponge` from the `moreutils` package on Debian.

Using `tee` in place of `sponge` would not be a viable option. You say that you have tried it and it seemed to work for you. It may work and it may not: it depends on the timing of when the commands in the pipeline are started (they are started pretty much concurrently), and on the size of the input data file.

The following is an example showing a situation where using `tee` would not work. The original file is 200000 bytes, but after the pipeline, it is truncated to 32 KiB (which could well correspond to some buffer size on my system).