Linux – “Leaky” pipes in Linux

Tags: buffer, fifo, linux, pipe

Let's assume you have a pipeline like the following:

$ a | b

If b stops processing stdin, after a while the pipe fills up and writes from a to its stdout will block (until either b starts processing again or it dies).
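
For example, on a typical Linux system, where a pipe holds 64 KiB by default, the writer below fills the pipe almost immediately and then sits blocked in write(2) until the reader exits:

$ seq 1 10000000 | sleep 10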

To avoid this, I might be tempted to use a bigger pipe (or, more simply, buffer(1)), like so:

$ a | buffer | b

This would simply buy me more time: in the end a would still stop.

What I would love to have (for a very specific scenario that I'm addressing) is a "leaky" pipe that, when full, drops some data (ideally, line by line) from the buffer so that a can continue processing. As you can probably imagine, the data that flows through the pipe is expendable: having it processed by b is less important than having a able to run without blocking.

To sum it up, I would love to have something like a bounded, leaky buffer:

$ a | leakybuffer | b

I could probably implement it quite easily in any language; I was just wondering if there's something "ready to use" (or something like a bash one-liner) that I'm missing.
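
Just to make the idea concrete, here is a rough, purely illustrative sketch (in Perl, with an arbitrary 1000-line cap) of the kind of behaviour I mean; this is not something I'd want to maintain:

#!/usr/bin/perl
# leakybuffer sketch: bounded queue of lines, oldest dropped when full
use strict;
use warnings;
use Fcntl;

my $max = 1000;        # keep at most this many pending lines (arbitrary)
my @buf;

# non-blocking stdout, so a slow or stuck consumer can't stall us
fcntl(STDOUT, F_SETFL, O_NONBLOCK) or die "fcntl: $!";

while (defined(my $line = <STDIN>)) {
    push @buf, $line;
    shift @buf while @buf > $max;          # "leak" the oldest lines
    while (@buf) {                         # drain as much as the consumer will take
        my $n = syswrite(STDOUT, $buf[0]);
        last unless defined $n && $n > 0;  # pipe full: try again later
        substr($buf[0], 0, $n, '');        # drop the part that was written
        shift @buf if $buf[0] eq '';
    }
}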

Note: in the examples I'm using regular pipes, but the question applies equally to named pipes.


While I awarded the answer below, I also went on to implement the leakybuffer command myself, because the simple solution below had some limitations: https://github.com/CAFxX/leakybuffer

Best Answer

The easiest way would be to pipe through some program that sets non-blocking output. Here is a simple Perl one-liner (which you can save as leakybuffer) that does so:
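
perl -MFcntl -e 'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (<STDIN>) { print }'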

So your a | b becomes:

a | perl -MFcntl -e \
    'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (<STDIN>) { print }' | b

What it does is read the input and write it to the output (same as cat(1)), but with the output set to non-blocking: if a write fails it will return an error and data will be lost, but the process will just continue with the next line of input, as we conveniently ignore the error. The process is kind of line-buffered, as you wanted, but see the caveat below.
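
If you prefer a small script instead of a one-liner, the same code can be written out with comments (saving it as leakybuffer, as suggested above, is optional):

#!/usr/bin/perl
# leakybuffer: copy stdin to stdout, but never block on a full output pipe
use warnings;
use Fcntl;          # F_SETFL, O_NONBLOCK

# writes to a full pipe now fail with EAGAIN instead of blocking
fcntl(STDOUT, F_SETFL, O_NONBLOCK) or die "fcntl: $!";

while (<STDIN>) {
    print;          # a failed or partial write just loses data; the error is ignored
}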

You can test it with, for example:

seq 1 500000 | perl -w -MFcntl -e \
    'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (<STDIN>) { print }' | \
    while read a; do echo $a; done > output

You will get an output file with lost lines (the exact output depends on the speed of your shell, etc.), like this:

12768
12769
12770
12771
12772
12773
127775610
75611
75612
75613

You can see where the shell lost lines after 12773, but also an anomaly: Perl didn't have enough buffer space for the whole of 12774\n, only for 1277, so it wrote just that, and the next number, 75610, therefore does not start at the beginning of a line, which makes it a little ugly.

That could be improved upon by having Perl detect when a write did not succeed completely and then later try to flush the remainder of the line while ignoring the new lines coming in, but that would complicate the Perl script quite a bit, so it is left as an exercise for the interested reader :)
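
For the interested reader, a rough, untested sketch of that improvement could look something like this (it remembers the unwritten tail of a partially-written line and keeps trying to flush it, dropping whole incoming lines in the meantime, so unrelated data never lands mid-line):

#!/usr/bin/perl
# sketch: like the one-liner above, but never leaves a line half-written
use warnings;
use Fcntl;

fcntl(STDOUT, F_SETFL, O_NONBLOCK) or die "fcntl: $!";

my $pending = '';    # unwritten tail of the last partially-written line

while (defined(my $line = <STDIN>)) {
    if (length $pending) {
        # finish the interrupted line first; the current input line is sacrificed
        my $n = syswrite(STDOUT, $pending);
        substr($pending, 0, $n, '') if defined $n && $n > 0;
        next;
    }
    my $n = syswrite(STDOUT, $line);
    next unless defined $n;                            # pipe full: drop the whole line
    $pending = substr($line, $n) if $n < length $line; # partial write: keep the tail
}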

Update (for binary files): If you are not processing newline-terminated lines (like log files or similar), you need to change the command slightly, or Perl will consume large amounts of memory (depending on how often newline characters appear in your input):

perl -w -MFcntl -e 'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (read STDIN, $_, 4096) { print }' 

It will work correctly for binary files too (without consuming extra memory).
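
In the original pipeline it is used the same way as before, e.g.:

a | perl -w -MFcntl -e 'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (read STDIN, $_, 4096) { print }' | b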

Update 2 (nicer text file output): avoiding output buffering, using syswrite instead of print:

seq 1 500000 | perl -w -MFcntl -e \
    'fcntl STDOUT,F_SETFL,O_NONBLOCK; while (<STDIN>) { syswrite STDOUT,$_ }' | \
    while read a; do echo $a; done > output

It seems to fix the problems with "merged lines" for me:

12766
12767
12768
16384
16385
16386

(Note: one can verify at which lines the output was cut with this one-liner: perl -ne '$c++; next if $c==$_; print "$c $_"; $c=$_' output)
