So I asked on the #systemd IRC channel, and it turns out that journald (the logging daemon of systemd) does not periodically flush its logs to disk at all. This means that your recent logs are at risk at any time.
Sending SIGUSR2 to journald causes the logs to be written to disk, but if you do this multiple times, many files will be created (the option is actually described as "log rotation").
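For reference, signaling journald looks like this (systemd-journald.service is the standard unit name; you need root, and on a non-systemd machine the command simply fails):

```shell
# Ask journald to write out (rotate) its runtime journal to disk.
systemctl kill --signal=SIGUSR2 systemd-journald.service \
    || echo "could not signal journald (not root, or no systemd here)"
# Equivalent, without systemctl:
#   kill -USR2 "$(pidof systemd-journald)"
```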
In the end, I decided to go with another suggestion: using a dedicated syslog daemon for collecting kernel logs. As rsyslog was suggested (and I already had experience with it), I explored that option further. I have written some more details in the Arch Wiki about using rsyslog.
The idea is to run rsyslog, collecting only data from the kernel facility. As rsyslog reads from /proc/kmsg (which allows only a single reader) and journald reads from /dev/kmsg (which allows multiple readers), neither daemon loses logs (very important to me!). Configure rsyslog to write kernel messages to a file, and make sure that this file is rotated to keep it from eating your disk space.
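A minimal rsyslog configuration along these lines might look like the fragment below (the output path is my choice; rotation itself would be handled by logrotate):

```
# /etc/rsyslog.conf (fragment)
module(load="imklog")            # pull in kernel messages
kern.*    -/var/log/kernel.log   # write them (async, hence the "-") to a dedicated file
```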
This solution is not perfect:
- Other logs (for example, from NetworkManager) are lost. This could be solved by forwarding more logs from syslog to journald (which means duplication!).
- Duplication of logs: the kernel messages are written to two files. This is a non-issue; in general the volume of logs is small, and you would rather have extra copies of the logs than none. You can also use fast tools like grep on the single log file, or the slower but fancier journalctl.
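To make the grep-versus-journalctl point concrete, here is a tiny sketch (the path /tmp/kernel.log and its contents are stand-ins; in the real setup it would be wherever rsyslog writes kernel messages):

```shell
# Stand-in for the rsyslog-written kernel log
printf 'kernel: usb 1-1: new USB device\nkernel: Out of memory: Kill process 42\n' > /tmp/kernel.log
grep -c 'Out of memory' /tmp/kernel.log   # plain-text search; prints 1
# journalctl -k -g 'Out of memory'        # the journald equivalent (slower, fancier)
```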
There is a TODO item for flushing logs more frequently, but that is still not reliable enough:
journal: send out marker messages every now and then, and immediately sync with fdatasync() afterwards, in order to have hourly guaranteed syncs.
Hopefully, systemd/journald will eventually get an option to periodically flush the logs to disk, but meanwhile we can combine tools to achieve the goal.
Following this, one can very well make that last plan of yours work. For the command-to-be-sent not to be processed by the sending shell, it has to be a string when it reaches the pipe (thus echo "command", not echo `command`, which would execute it on the spot). Then it has to be read by a background process (daemon-like, but not necessarily a daemon) started in the appropriate terminal, and evaluated by that same process.
But it is boilerplatey to have a script per pipe, so let's generalize it into a script called term-pipe-r.sh (don't forget to chmod +x it!):
#!/bin/bash
pipe=$1                          # the pipe name is the first argument
trap 'rm -f "$pipe"' EXIT        # delete the pipe when the script exits

if [[ ! -p $pipe ]]; then        # if the pipe doesn't exist, create it
    mkfifo "$pipe"
fi

while true                       # cycle eternally...
do
    if read line < "$pipe"; then
        if [[ "$line" == 'close the term-pipe pipe' ]]; then
            break                # if the pipe closing message is received,
        fi                       # break the while cycle
        echo                     # a line break is needed because of the prompt
        eval "$line"             # run the line: as this script is started
    fi                           # in the target terminal,
done                             # the line will run there
echo "<pipe closing message>"    # custom message at the end of the script
So say you want /dev/tty3 to receive commands: just go there and do
./term-pipe-r.sh tty3pipe & # $1 will be tty3pipe (in a new process)
And to send commands, from any terminal (even from itself):
echo "command" > tty3pipe
or to run a file there:
cat some-script.sh > tty3pipe
Note that this piping ignores files like .bashrc, and the aliases defined there, such as alias ls='ls --color'. Hope this helps someone out there.
Edit (note - advantage of non-daemon):
Above I talked about the pipe reader not necessarily being a daemon, but in fact, having checked the differences, it turns out it is much better to be a mere background process in this case. That way, when you close the terminal, the exit signal (SIGHUP, SIGTERM, or whatever) is received by the script as well, and the pipe is then deleted automatically (see the line starting with trap in the script), avoiding a useless process and file (and maybe other processes stuck redirecting to the useless pipe).
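A minimal demonstration of why the trap matters (the pipe name /tmp/trapdemo.pipe is made up):

```shell
# A child shell creates a pipe and traps EXIT to remove it
bash -c 'trap "rm -f /tmp/trapdemo.pipe" EXIT; mkfifo /tmp/trapdemo.pipe'
# By the time the child has exited, the trap has already cleaned up:
[ -p /tmp/trapdemo.pipe ] || echo "pipe was cleaned up"
```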
Edit (automation):
Still, it is boring to have to run a script you (I, at least) probably want most of the time. So, let's automate it! It should start in any terminal, and one thing all of them read is .bashrc. Plus, it sucks to have to type ./term-pipe-r.sh. So, one may do:
cd /bin # go to /bin, where Bash gets command names
ln -s /directory/of/term-pipe-r.sh tpr # call it tpr (terminal pipe reader)
Now to run it you'd only need tpr tty3pipe & in /dev/tty3 whenever you wanted. But why do that when you can have it done automatically? So this should be added to .bashrc. But wait: how will it know the pipe name? It can base the name on the TTY (which can be known with the tty command), using simple regexes in sed (and some tricks). What you should add to ~/.bashrc will then be:
pipe="$(sed 's/\/dev\///' <<< "$(tty)" | sed 's/\///')pipe"
#       ^^^- take out '/dev/' and the other '/', then append 'pipe'
tpr "$pipe" &   # start our script with the appropriate pipe name
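To see the derivation in isolation (using a fixed stand-in for the output of tty):

```shell
tty_name='/dev/pts/0'             # stand-in for "$(tty)"
pipe="$(sed 's/\/dev\///' <<< "$tty_name" | sed 's/\///')pipe"
echo "$pipe"                      # prints "pts0pipe"; on /dev/tty3 it would be "tty3pipe"
```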
Best Answer
When a named pipe is created, via mkfifo (or however else you can do it), it creates a pipe "file" that remains in place until it is removed (or, in some cases, until your machine reboots, if you forget to remove it). You can create your own named pipe with mkfifo simply, as it takes very few arguments, like so:
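The command itself appears to have been lost from this copy; it was presumably of this shape (the 0666 mode is my assumption):

```shell
rm -f /tmp/corncob             # clear any stale pipe first
mkfifo -m 0666 /tmp/corncob    # create a named pipe, readable/writable by everyone
ls -l /tmp/corncob             # the leading 'p' in the listing marks it as a pipe
```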
That's all it takes to create the named pipe /tmp/corncob. The -m flag, which sets the permissions, is not necessary; if you omit it, a new named pipe gets whatever your system's default permissions are (subject to your umask). As another side note, you can also pass the -m flag with symbolic ("alpha") permissions, rather than octal, like:
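(Again, the exact command is missing here; a symbolic-mode equivalent of the octal form, with the mode assumed, would be:)

```shell
rm -f /tmp/corncob                       # clear any previous pipe first
mkfifo -m u=rw,g=rw,o=rw /tmp/corncob    # same as -m 0666, spelled symbolically
```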
to create the exact same thing. You can delete the named pipe just like you delete a file: rm, and it's gone.
One thing you should note about named pipes is that they generally (so far as I've seen) only pass one stream of input/output through themselves at a time. That is to say, if you have one process sending input to the named pipe and two processes reading from it, only one of the readers will receive the output. It should also be noted that, in such a situation, once the reader that was receiving output exits, the other process begins receiving output from the named pipe (if it is still attempting to read from it). Was that a really long sentence or am I just typing fast? ;)
An example of what I mean below:
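(The original example seems to be missing from this copy; here is a sketch of the behaviour, with made-up pipe and file names:)

```shell
rm -f /tmp/demopipe && mkfifo /tmp/demopipe
cat /tmp/demopipe > /tmp/reader1.out &    # reader 1
cat /tmp/demopipe > /tmp/reader2.out &    # reader 2
sleep 0.2                                 # let both readers open the pipe
echo "hello" > /tmp/demopipe              # one write, two candidate readers
wait                                      # both readers exit when the writer closes
cat /tmp/reader1.out /tmp/reader2.out     # "hello" appears exactly once
rm /tmp/demopipe
```

Only one of the two reader files ends up containing the line; the other stays empty.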