Bash – Redirecting stderr and stdout separately to a function in a bash script

bash, io-redirection, logs, shell-script, systemd-journald

I'm working on a wrapper script to log cronjob output to journald.

I have several goals:

  • must log stderr and stdout to journald with different priority levels and prefix-tags
  • must also output stderr of the wrapped command to stderr of the wrapper script (to be caught by cron and emailed in an alert)
  • must preserve line order throughout

So far I seem to have two problems:

  1. I'm not sure how to redirect stderr and stdout separately to my logging function. Everything I've tried thus far fails to redirect anything at all. In my defense, redirection is not one of my strong points. (Resolved, see comments)
  2. My current approach { $_EXEC $_ARGS 2>&1 1>&7 7>&- | journaldlog error; } 7>&1 1>&2 | journaldlog info appears to run the logging function exactly twice, once for stderr and once for stdout, which will not preserve line order. I would rather it run the function once per output line. (journaldlog is sketched after this list.)
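For the sake of the question, assume journaldlog looks roughly like this (a simplified stand-in; the real one adds the priority levels and prefix tags):

journaldlog() {
    # Read the incoming stream line by line and print a tagged copy.
    while IFS= read -r line
    do
        printf '[%s] %s\n' "$1" "$line"
    done
}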

Should I just give up on doing this in bash and try Perl instead? I'm sure my Perl experience would come back to me… eventually.

I've tried a lot of different approaches, most of which I found online and, in particular, on Stack Exchange. I honestly can't remember what else I've tried… it's just a blur at this point. And the bash docs haven't been much help either.

Best Answer

You can just use process substitution directly in the redirections:

gen > >(one) 2> >(two)

That will give the standard output of gen as the standard input to one and the standard error of gen as the input to two. Those could be executable commands or functions; here are the functions I used:

one() {
    # Read lines raw (no backslash mangling or whitespace trimming).
    while IFS= read -r line
    do
        # Print the line in red, then reset the colour.
        echo $'\e[31m'"$line"$'\e[0m'
    done
}

two() {
    while IFS= read -r line
    do
        # Print the line in green, then reset the colour.
        echo $'\e[32m'"$line"$'\e[0m'
        # Also copy the line to ordinary standard error.
        echo "$line" >&2
    done
}

They output their input lines in red and green respectively, and two also writes to ordinary standard error. You could do whatever you wanted inside the loops, or send the input on to another program or whatever you required.
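For the journald case in the question, the loop body could hand each line to systemd-cat with a tag and a syslog priority, and the error-side path could also copy each line to the wrapper's own stderr for cron to catch. A rough sketch along those lines (the cronwrap tag is made up, and spawning systemd-cat per line is simple but slow; piping each whole stream through a single systemd-cat would be cheaper at the cost of per-line control):

journaldlog() {
    local prio=$1
    while IFS= read -r line
    do
        # Log the line to journald with a tag and the given priority.
        printf '%s\n' "$line" | systemd-cat -t cronwrap -p "$prio"
        # Copy error lines to the wrapper's own stderr for cron to email.
        if [ "$prio" = err ]
        then
            printf '%s\n' "$line" >&2
        fi
    done
}

# _EXEC and _ARGS as in the question.
$_EXEC $_ARGS > >(journaldlog info) 2> >(journaldlog err)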


Note that there's not really any such thing as "preserving line order" once you're at this point - they're two separate streams and separate processes, and it's quite possible that the scheduler runs one process for a while before it gives the other one a go, with the data kept in the kernel pipe buffer until it's read. I've been using a function outputting odd numbers to stdout and even to stderr to test with, and it's common to get a run of a dozen or more in a row from each.
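That test generator is along these lines (a minimal sketch):

gen() {
    # Odd numbers to stdout, even numbers to stderr.
    for i in {1..20}
    do
        if (( i % 2 ))
        then
            echo "$i"
        else
            echo "$i" >&2
        fi
    done
}

gen > >(one) 2> >(two)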

It would be possible to get something closer to the ordering as it appears in the terminal, but probably not in Bash. A C program using pipe and select could reconstruct an ordering (though still not guaranteed to match what was displayed, for the same reason: your process might not be scheduled for a while, and once there's a backlog there's no telling what came first¹). If the ordering is vitally important you'll need another approach, and likely the coöperation of the wrapped executable.

¹There may also be platform-specific methods for controlling the pipe buffering, but they're unlikely to help you enough either. On Linux, you can shrink the buffer size as low as the system page size but no lower. I don't know of one that lets you force writes through immediately (or blocks until they're read). In any case, they could well be buffered on the source end in the fashion of stdio streams and then you'd never see an ordering.
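One partial mitigation for that last point, if the wrapped program is dynamically linked and uses default stdio buffering, is to force line-buffered stdout with coreutils' stdbuf (shown here with the question's $_EXEC/$_ARGS):

# stdout to a pipe is fully buffered by default; -oL makes it
# line-buffered. stderr is usually unbuffered already. This has no
# effect on programs that set their own buffering, or on shell functions.
stdbuf -oL $_EXEC $_ARGS > >(journaldlog info) 2> >(journaldlog err)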
