I often use more than one terminal (or terminal emulator) at a time; and while in X I can copy and paste commands, besides being impractical, that obviously does not work on a real TTY. The first idea that occurs to me is something like:
command > /dev/sometty
Unfortunately, the command is run in the current shell before anything reaches the target, and no tricks like echo `command` will work, no matter how many special Bash characters ($, `, ", etc.) are in there. So /dev/sometty simply gets some text.
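To illustrate the point, here is a minimal demonstration, with a plain file standing in for /dev/sometty (the file name is only for the example):

```shell
# Redirection only delivers text: nothing on the receiving end
# executes it. Sending the string "date" just stores the word.
echo "date" > /tmp/sometty-demo
cat /tmp/sometty-demo        # shows the literal word "date", not a date
rm -f /tmp/sometty-demo
```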
The problem is that, sometimes, it is even worth it for me to pipe those commands into a file, make it executable, and so on; in short: to make a script and run it from the appropriate terminal. But that is a lot of work. I've been thinking of making a script to create these files, so that for an instruction like:
termpipe -e "command1\ncommand2\"command 1 argument\\n(s)\"" -t /dev/sometty
it would do something like:
1. pipe the commands to, say, /tmp/termpipe-20140208-153029-1.sh
2. make the file executable
3. run the file in the appropriate terminal
4. delete the script when it is done executing
AFAIK, the problem is in step 3: this doesn't solve anything, as I'd need another termpipe instance to run the first one on the appropriate terminal. And one for that one. And another for that, ad infinitum. So this cannot work. Or can it?…
The solution could be to use a named pipe per terminal, and when each starts, a script would tell the pipe to forward anything it receives to the terminal, and then to execute it (like a sort of daemon).
I think this might work, but I don't know how to set up the initial script. How can I do that? How can I tell the FIFO to hand the piped commands to the respective terminal to run? I don't know much Bash, so I'd appreciate full explanations.
Best Answer
Following this, one can very well make that last plan of yours work. For the command-to-be-sent not to be processed by the shell, it has to be in the form of a string when it reaches the pipe (thus echo "command", not echo `command`). Then it has to be read by a background process (like a daemon, but not necessarily one) started in the appropriate terminal, and evaluated by that same process. But it is boilerplate-heavy to have one script per pipe, so let's generalize it into a single script, term-pipe-r.sh (don't forget to chmod +x it!).

So say you want /dev/tty3 to receive commands: just go there and start the reader in the background. Then, to send commands, write them into the pipe from any terminal (even from the receiving one itself).
or to run a file there:
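No code survived for this step either; a hedged guess, assuming a reader is already listening on /tmp/tty3pipe, is to send a single command string that runs the file (the file path here is hypothetical):

```shell
# Run the commands in some-script.sh on the terminal whose reader
# listens on /tmp/tty3pipe (some-script.sh is a made-up name).
echo "bash /tmp/some-script.sh" > /tmp/tty3pipe
```

Sending one command string, rather than the file's raw contents, keeps the whole file executing as a unit on the receiving side.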
Note that this piping ignores files like .bashrc, and the aliases in there, such as alias ls='ls --color'. Hope this helps someone out there.

Edit (note - advantage of non-daemon):
Above I talked about the pipe reader not necessarily being a daemon; in fact, I checked the differences, and it turns out it is far better as a mere background process in this case. That way, when you close the terminal, the terminating signal (SIGHUP, SIGTERM, or whatever) is received by the script as well, and the pipe is then deleted automatically (see the line starting with trap in the script), avoiding a useless process and file (and maybe others, if anything were still redirecting to the defunct pipe).

Edit (automation):
Still, it is boring to have to run a script that you (I, at least) probably want most of the time. So, let's automate it! It should start in any terminal, and one thing all of them read is .bashrc. Plus, it sucks to have to type ./term-pipe-r.sh, so one may wrap it in a shell function, tpr. Now, to run it, you'd only need
tpr tty3pipe & in /dev/tty3 whenever you want. But why do that when you can have it done automatically? So this should be added to .bashrc. But wait: how will it know the pipe name? It can base the name on the TTY (which can be known with the tty command), using simple regular expressions in sed (and some tricks). What you should add to ~/.bashrc will then be:
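The original snippet is not preserved here; a sketch of what the .bashrc addition could look like, assuming term-pipe-r.sh lives in ~/bin and that the sed expression simply strips /dev/ and any remaining slashes from the tty name:

```shell
# Wrapper so the reader can be started as "tpr <pipe-name> &"
# (assumes term-pipe-r.sh is in ~/bin).
tpr() {
    "$HOME/bin/term-pipe-r.sh" "$@"
}

# Start a reader automatically, naming the pipe after the current
# TTY: "tty" prints e.g. /dev/tty3 or /dev/pts/0, and the sed
# expression turns that into tty3pipe or pts0pipe respectively.
tpr "$(tty | sed 's:^/dev/::; s:/::g')pipe" &
```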