The spawn command returns a ChildProcess object whose stdout and stderr properties are readable streams, so you will need to pipe them to a writable stream to do something useful.
Piping to a file
// the filed module simplifies working with the filesystem
var filed = require('filed')
var path = require('path')
var spawn = require('child_process').spawn
var outputPath = path.join(__dirname, 'out.txt')
// filed is smart enough to create a writable stream that we can pipe to
var writableStream = filed(outputPath)
var cmd = path.join(__dirname, 'my_script.bash')
var args = [] // you can optionally pass arguments to your spawned process
var child = spawn(cmd, args)
// child.stdout and child.stderr are both streams so they will emit data events
// streams can be piped to other streams
child.stdout.pipe(writableStream)
child.stderr.pipe(writableStream)
child.on('error', function (err) {
console.log('an error occurred')
console.dir(err)
})
// code will be the exit code of your spawned process. 0 on success, a positive integer on error
child.on('close', function (code) {
if (code !== 0) {
console.log('spawned process exited with error code', code)
return
}
console.log('spawned process completed correctly and wrote to the file at path', outputPath)
})
You will need to install the filed module to run the example above
npm install filed
Piping to stdout and stderr
process.stdout and process.stderr are both writable streams so you can pipe the output of your spawned command directly to the console as well
var path = require('path')
var spawn = require('child_process').spawn
var cmd = path.join(__dirname, 'my_script.bash')
var args = [] // you can optionally pass arguments to your spawned process
var child = spawn(cmd, args)
// child is an EventEmitter, so it will emit events as the process runs
child.stderr.pipe(process.stderr)
child.stdout.pipe(process.stdout)
child.on('error', function (err) {
console.log('an error occurred')
console.dir(err)
})
// code will be the exit code of your spawned process. 0 on success, a positive integer on error
child.on('close', function (code) {
if (code !== 0) {
console.log('spawned process exited with error code', code)
return
}
console.log('spawned process completed correctly')
})
As for the cause, use strace.
tail -f | strace bash >> foo
The second echo, echo hello > pToB, then gives me this:
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
read(0, "e", 1) = 1
read(0, "c", 1) = 1
read(0, "h", 1) = 1
read(0, "o", 1) = 1
read(0, " ", 1) = 1
read(0, "h", 1) = 1
read(0, "e", 1) = 1
read(0, "l", 1) = 1
read(0, "l", 1) = 1
read(0, "o", 1) = 1
read(0, "\n", 1) = 1
write(1, "hello\n", 6) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=3299, si_uid=1000} ---
+++ killed by SIGPIPE +++
So, the second time it tries to write hello\n, it gets a broken-pipe error; that's why you can't read hello (it was never written), and bash quits, so that's the end of it.
You'd have to use something that keeps the pipe open, I guess.
How about this?
(while read myline; do echo "$myline"; done) < pToP
For more background information, man 7 pipe may be relevant; it describes the various error cases around pipes.
Best Answer
It has to do with the closing of the file descriptor.
In your first example, echo writes to its standard output stream, which the shell opens to connect it with f, and when echo terminates, its descriptor is closed (by the shell). On the receiving end, the shell, which reads input from its standard input stream (connected to f), reads ls, runs ls and then terminates due to the end-of-file condition on its standard input. The end-of-file condition occurs because all writers to the named pipe (only one in this example) have closed their end of the pipe.
In your second example, exec 3>f opens file descriptor 3 for writing to f, then echo writes ls to it. It's the shell that now has the file descriptor opened, not the echo command. The descriptor remains open until you do exec 3>&-. On the receiving end, the shell, which reads input from its standard input stream (connected to f), reads ls, runs ls and then waits for more input (since the stream is still open). The stream remains open because the writers to it (the shell, via exec 3>f, and echo) have not all closed their end of the pipe (exec 3>f is still in effect).
I have written about echo above as if it were an external command. It is most likely built into the shell. The effect is the same nonetheless.
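The difference can be sketched with a few shell commands. This assumes a fresh FIFO named f in the current directory, and uses a background cat as the reader in the second case:

```shell
mkfifo f

# Case 1: echo's write end closes as soon as echo exits,
# so the reader sees end-of-file after a single line
echo ls > f &          # blocks until a reader opens the FIFO
cat f                  # prints "ls", then exits on end-of-file

# Case 2: the shell itself holds fd 3 open for writing
cat f > out.txt &      # background reader; keeps waiting for input
exec 3>f               # open fd 3 for writing; it stays open
echo ls >&3            # the reader receives "ls" but no end-of-file
exec 3>&-              # closing fd 3 delivers end-of-file
wait                   # the background cat exits now
rm f
```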