If you get rid of the killing and shutdown stuff (which is unsafe: in an extreme but not unfathomable case where child.py dies before the (head -n 1 shutdown; kill -9 $parent) & subshell does, you may end up kill -9ing some innocent process), then child.py won't terminate because your parent.py isn't behaving like a good UNIX citizen.
The cat std_out & subprocess will have finished by the time you send the quit message: the writer to std_out is child_original.py, which finishes upon receiving quit, at which moment it closes its stdout, which is the std_out pipe, and that close makes the cat subprocess finish.
The cat > std_in isn't finishing because it's reading from a pipe originating in the parent.py process, and parent.py didn't bother to close that pipe. If it did, cat > std_in, and consequently the whole child.py, would finish by itself, and you wouldn't need the shutdown pipe or the killing part. (Killing a process that isn't your child is always a potential security hole on UNIX: a race condition due to rapid PID recycling could make you kill the wrong process.)
Processes at the right end of a pipeline generally only finish once they're done reading their stdin, but since you're not closing yours (child.stdin), you're implicitly telling the child process "wait, I have more input for you", and then you go and kill it because it waits for more input from you, as it should.
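The EOF mechanics described above are easy to see in a minimal sketch, with plain cat standing in for any process that reads its stdin until end-of-file:

```python
import subprocess

# cat here is just a stand-in for any process that reads stdin until EOF.
p = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b'hello\n')
p.stdin.flush()
# cat is still running at this point: it is waiting for more input from us.
p.stdin.close()           # closing our end delivers EOF to cat...
p.wait()                  # ...so it finishes on its own; no kill needed
print(p.stdout.read())    # b'hello\n'
```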
In short, make parent.py behave reasonably:
from __future__ import print_function
from subprocess import Popen, PIPE

child = Popen('./child.py', stdin=PIPE, stdout=PIPE)

for letter in 'abcde':
    print('Parent writes to child: ', letter)
    child.stdin.write(letter + '\n')
    child.stdin.flush()
    response = child.stdout.readline()
    print('Response from the child:', response)
    assert response.rstrip() == letter.upper(), 'Wrong response'

child.stdin.write('quit\n')
child.stdin.flush()
child.stdin.close()

print('Waiting for the child to terminate...')
child.wait()
print('Done!')
And your child.py can be as simple as:
#!/bin/sh
cat std_out &
cat > std_in
wait  # basically to assert that cat std_out has finished at this point
(Note that I got rid of the fd dup calls; otherwise you'd need to close both child.stdin and the child_stdin duplicate.)
Since parent.py operates in a line-oriented fashion, GNU cat is unbuffered (as mikeserv pointed out), and child_original.py operates in a line-oriented fashion, you've effectively got the whole thing line-buffered.
Note on cat: "unbuffered" might not be the luckiest term, as GNU cat does use a buffer. What it doesn't do is try to fill that buffer completely before writing things out (unlike stdio). Basically, it makes read requests to the OS for a specific size (its buffer size) and writes out whatever it receives, without waiting to accumulate a whole line or a full buffer. (read(2) is allowed to be lazy and return only what is available at the moment, rather than the whole buffer you asked for.)
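In Python terms, that main loop amounts to something like the following (my own approximation of the logic, not a translation of the actual cat.c source; the buffer size is a guess, since cat derives its size from st_blksize):

```python
import os

BUFSIZE = 128 * 1024  # illustrative; real cat sizes this from st_blksize

def naive_cat(in_fd, out_fd):
    while True:
        chunk = os.read(in_fd, BUFSIZE)  # read(2) may return fewer bytes
        if not chunk:                    # a zero-length read means EOF
            break
        os.write(out_fd, chunk)          # write whatever arrived, immediately
```

The point is the absence of any "wait until I have a full line / full buffer" step, which is what "unbuffered" means here.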
(You can inspect the source code at http://git.savannah.gnu.org/cgit/coreutils.git/tree/src/cat.c; safe_read, used instead of plain read, lives in the gnulib submodule and is a very simple wrapper around read(2) that abstracts away EINTR (see the man page).)
If the output of inotifywait -q -m ./ is not redirected and you're running it in a terminal emulator, the output goes to a pty device. A pty device is a form of interprocess communication, a bit like a pipe, though with added features to facilitate terminal-like interactions.
At the other end of that pty "pipe", your terminal emulator reads what inotifywait writes and renders it on the screen. Doing that rendering is complicated and expensive in CPU time.
If your terminal emulator is slower to empty that pipe than inotifywait is to fill it up, the pty pipe will get full. When it is full, as for pipes, the writing process blocks (the write() system call doesn't return) until there's free space again in the "pipe".
With my version of Linux, I find that I can write 19457 bytes to a pty device with nothing reading at the other end before it blocks if I write 1 byte at a time:
$ socat -u 'exec:dd bs=1 if=/dev/zero,pty' 'exec:sleep inf,nofork' &
[1] 1247815
$ pkill -USR1 -x dd
19458+0 records in
19457+0 records out
19457 bytes (19 kB, 19 KiB) copied, 14.7165 s, 1.3 kB/s
19458 bytes if I write 2 bytes at a time, 19712 if I write 256 bytes at a time, and different values if I put the terminal in raw mode or include newlines in the data I send (as they get transformed to CRLFs).
In any case, I don't think that buffer size is customizable.
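A similar experiment can be run from Python with the pty module (a sketch assuming Linux; the exact byte count varies between systems and terminal settings, as the dd runs above show):

```python
import os
import pty

# Write to the slave side of a pty with nothing reading the master side.
# With the fd set non-blocking, a full buffer raises BlockingIOError
# instead of blocking, so we can count how much fitted in.
master, slave = pty.openpty()
os.set_blocking(slave, False)
total = 0
try:
    while True:
        total += os.write(slave, b'x' * 256)
except BlockingIOError:
    pass
print('the pty took', total, 'bytes before a write would block')
```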
inotifywait uses the inotify API to retrieve that list of events. In the inotify(7) man page, you'll find:
The following interfaces can be used to limit the amount of kernel
memory consumed by inotify:
/proc/sys/fs/inotify/max_queued_events
The value in this file is used when an application calls
inotify_init(2) to set an upper limit on the number of events that
can be queued to the corresponding inotify instance. Events in
excess of this limit are dropped, but an IN_Q_OVERFLOW event is
always generated.
When inotifywait is blocked on the write() to standard output, it can't process the events put on that queue by the kernel, and if that queue itself gets full, events are discarded.
On my system,
$ sysctl fs.inotify.max_queued_events
fs.inotify.max_queued_events = 16384
Now, when you do:
inotifywait -q -m ./ | cat
this time there is a pipe between inotifywait and cat, and a pty between cat and your terminal emulator.
Pipes have a larger buffer than ptys: 64KiB by default on Linux, though it can be raised on a per-pipe basis up to the fs.pipe-max-size sysctl value (1MiB by default) using fcntl(fd, F_SETPIPE_SZ, newsize).
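Both the default pipe capacity and the F_SETPIPE_SZ resizing can be observed from Python (assumes Linux; the constants are exposed by the fcntl module since Python 3.10, with the raw Linux values used as a fallback for older versions):

```python
import fcntl
import os

# Fall back to the raw Linux fcntl command values on Python < 3.10.
F_GETPIPE_SZ = getattr(fcntl, 'F_GETPIPE_SZ', 1032)
F_SETPIPE_SZ = getattr(fcntl, 'F_SETPIPE_SZ', 1031)

r, w = os.pipe()
print(fcntl.fcntl(w, F_GETPIPE_SZ))        # 65536 by default on Linux

fcntl.fcntl(w, F_SETPIPE_SZ, 1024 * 1024)  # allowed up to fs.pipe-max-size
print(fcntl.fcntl(w, F_GETPIPE_SZ))        # 1048576

# A non-blocking write fails once the buffer is full, so we can measure it.
os.set_blocking(w, False)
total = 0
try:
    while True:
        total += os.write(w, b'\0' * 4096)
except BlockingIOError:
    pass
print(total)                               # 1048576: the pipe's new capacity
```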
So before inotifywait's write() blocks, both of those buffers need to fill up. On top of that, cat will also have read some data into its own reading buffer and be waiting to write it out itself. For each | cat you add, you add extra buffering space (at least 64KiB more).
With pv -q -B 1g, pv will buffer up to 1GiB of data internally.
Those cat and pv processes will be quicker at reading their input than your terminal emulator, because they need to do far less work to process it; but if inotifywait is not quick enough to read/decode/format events, some can still be dropped.
To minimize the chance of events being dropped, you can:
- increase fs.inotify.max_queued_events
- avoid sending inotifywait output to slow consumers, or add sufficient buffering if you do
- tune inotifywait filters to only select the events you're interested in
- make sure inotifywait and the consumers of its output are not given a low priority (no niceing them)
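The "add sufficient buffering" option can be sketched in a few lines of Python. This is a toy, unbounded in-memory relay in the spirit of pv -q -B 1g (my own illustration, not an existing tool; with an unbounded queue you trade dropped events for memory growth):

```python
import queue
import threading

def relay(infile, outfile, chunk_size=65536):
    """Copy infile to outfile through an unbounded in-memory queue, so a
    slow consumer never back-pressures the producer."""
    q = queue.SimpleQueue()  # unbounded: puts never block

    def drain():
        # Reader side: empty the producer's pipe as fast as possible.
        while True:
            chunk = infile.read1(chunk_size)  # returns b'' at EOF
            q.put(chunk)
            if not chunk:
                return

    threading.Thread(target=drain, daemon=True).start()

    # Writer side: may be arbitrarily slow; the queue just grows meanwhile.
    while chunk := q.get():
        outfile.write(chunk)
        outfile.flush()
```

Wrapped in a script that calls relay(sys.stdin.buffer, sys.stdout.buffer), it would sit in the middle of a pipeline like inotifywait -q -m ./ | python3 relay.py | slow-consumer.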
As with any debugging and hacking, YMMV.