It will eventually.
In:
cat /dev/random | strings --bytes 1 | tr -d '\n\t '
cat will never buffer, but it's superfluous anyway, as there's nothing to concatenate here; you can drop it:
< /dev/random strings --bytes 1 | tr -d '\n\t '
strings, though, will now buffer its output by blocks (of something like 4 or 8 KiB), since its output is no longer a terminal, as opposed to line-buffering when the output goes to a terminal. So it will only start writing to stdout once it has accumulated 4 KiB worth of characters to output, which with /dev/random is going to take a while.
tr's output goes to a terminal (if you're running that at a shell prompt in a terminal), so it will buffer its output line-wise. Because you're removing the \n characters, it will never have a full line to write, so instead it will write as soon as a full block has been accumulated (as when the output doesn't go to a terminal).
So tr is likely not to write anything until strings has read enough from /dev/random to write 8 KiB (2 blocks, possibly much more) of data, since the first block will probably contain some newline, tab or space characters.
On the system I'm trying this on, I get an average of 3 bytes per second from /dev/random (as opposed to about 12 MiB per second from /dev/urandom), so in the best-case scenario (the first 4096 bytes from /dev/random are all printable ones), we're talking 22 minutes before tr starts to output anything. But it's more likely going to be hours (in a quick test, I see strings writing a block for every 1 to 2 blocks read, and the output blocks contain about 30% newline characters, so I'd expect it to need to read at least 3 blocks before tr has 4096 characters to output).
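The figures above are from my machine; if you want to check the throughput on your own system, a quick sketch using dd (reading from /dev/urandom so it completes quickly; point it at /dev/random to measure the slow device, but be prepared to wait):

```shell
# Read 1 MiB from /dev/urandom; dd reports elapsed time and rate on stderr.
dd if=/dev/urandom of=/dev/null bs=64k count=16
```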
To avoid that, you could do:
< /dev/random stdbuf -o0 strings --bytes 1 | stdbuf -o0 tr -d '\n\t '
stdbuf is a GNU command (also found on some BSDs) that alters the stdio buffering of commands via an LD_PRELOAD trick.
Note that instead of strings, you can use tr -cd '[:graph:]', which will also exclude tab, newline and space.
You may want to fix the locale to C as well to avoid possible future surprises with UTF-8 characters.
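Putting those pieces together, a sketch of the unbuffered, locale-pinned variant (shown reading /dev/urandom so the example doesn't stall; substitute /dev/random for the original behaviour):

```shell
# Unbuffered stream of printable ASCII, with the locale pinned to C.
# head -c 32 bounds the example; drop it for a continuous stream.
LC_ALL=C stdbuf -o0 tr -cd '[:graph:]' < /dev/urandom | head -c 32; echo
```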
GnuPG consumes several bytes from /dev/random for each random byte it actually uses. You can easily check that with this command:
$ strace -e trace=open,read gpg --armor --gen-random 2 16 2>&1 | tail
open("/etc/gcrypt/rngseed", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/dev/urandom", O_RDONLY) = 3
read(3, "\\\224F\33p\314j\235\7\200F9\306V\3108", 16) = 16
open("/dev/random", O_RDONLY) = 4
read(4, "/\311\342\377...265\213I"..., 300) = 128
read(4, "\325\3\2161+1...302@\202"..., 172) = 128
read(4, "\5[\372l\16?\...6iY\363z"..., 44) = 44
open("/home/hl/.gnupg/random_seed", O_WRONLY|O_CREAT, 0600) = 5
cCVg2XuvdjzYiV0RE1uzGQ==
+++ exited with 0 +++
In order to output 16 bytes of high-quality entropy, GnuPG reads 300 bytes from /dev/random (the three reads of 128, 128 and 44 bytes in the trace above).
This is explained here: Random-Number Subsystem Architecture
Linux stores a maximum of 4096 bits (see cat /proc/sys/kernel/random/poolsize; on 2.6 and later kernels the value is in bits) of entropy. If a process needs more than is available (see cat /proc/sys/kernel/random/entropy_avail), then CPU usage becomes more or less irrelevant, as the feeding speed of the kernel's entropy pool becomes the limiting factor.
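To look at both numbers together, a minimal sketch (Linux-specific /proc paths, as above):

```shell
# One-shot report of the pool size and the current available-entropy estimate.
printf 'poolsize:      %s\nentropy_avail: %s\n' \
  "$(cat /proc/sys/kernel/random/poolsize)" \
  "$(cat /proc/sys/kernel/random/entropy_avail)"
```

Run it in a loop (for example under watch -n1) while gpg is generating keys to watch the pool drain.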
Best Answer
If a little redirection is acceptable, then pv is a good way in general to achieve this type of thing, but GPG has (unsurprisingly) /dev/random hard-coded into it, so that's not going to work here without some hackery. On Linux, using unshare to temporarily overlay /dev/random is probably the least disagreeable approach, though it requires root permissions: pv will block until there's a reader on the fifo; then run the overlay step as root or via sudo.

One obvious possibly-useful source of data is the random device driver itself (drivers/char/random.c). It supports a "debug" parameter, but sadly in the versions I've checked it's if-defined out (#if 0, in 2.6.x and 3.4.x), and it has been removed completely in recent kernels in favour of ftrace support: the driver makes an ftrace call (trace_extract_entropy()) each time data is read. For this task, that seems overkill to me, as do systemtap and the other tracing and debugging options (PDF).

A simple (but unappealing to most) option is to use an injected library to wrap the relevant open() and read() calls at the libc interface, similar to the solution to this question: Dynamic file content generation: Satisfying a 'file open' by a 'process execution'. If you wrap open64() and arrange for it to cache the descriptor when /dev/random is opened, you can log the size of each read().

To help get the entropy rolling in, I highly recommend asciipacman ;-)
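The code accompanying the unshare suggestion isn't shown above; a hypothetical reconstruction of the idea (the fifo path and the gpg invocation are illustrative, not the original commands) could look like:

```shell
# Illustrative sketch: meter /dev/random through pv, then bind-mount the
# fifo over /dev/random in a private mount namespace so gpg reads from it.
mkfifo /tmp/random.fifo
pv /dev/random > /tmp/random.fifo &   # pv shows the transfer rate; it blocks
                                      # until something opens the fifo
sudo unshare --mount sh -c '
  mount --bind /tmp/random.fifo /dev/random
  gpg --gen-key
'
```

Only the processes inside the unshared mount namespace see the overlaid /dev/random; the rest of the system is unaffected.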
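The libc-wrapping idea from the last paragraph can be sketched as a small LD_PRELOAD shim. This is a sketch under assumptions (file names are illustrative, and a real shim would also want to cover openat() and friends):

```shell
# Build a shim that remembers the fd returned when /dev/random is opened
# and logs the size of every read() on that descriptor.
cat > logread.c <<'EOF'
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int target_fd = -1;

/* Wrap both open() and open64(); LFS-built programs call the latter.
   The O_CREAT mode argument is ignored in this sketch. */
static int do_open(const char *sym, const char *path, int flags)
{
    int (*real)(const char *, int) = dlsym(RTLD_NEXT, sym);
    int fd = real(path, flags);
    if (fd >= 0 && strcmp(path, "/dev/random") == 0)
        target_fd = fd;                 /* remember the descriptor */
    return fd;
}
int open(const char *path, int flags, ...)   { return do_open("open", path, flags); }
int open64(const char *path, int flags, ...) { return do_open("open64", path, flags); }

ssize_t read(int fd, void *buf, size_t count)
{
    ssize_t (*real)(int, void *, size_t) = dlsym(RTLD_NEXT, "read");
    ssize_t n = real(fd, buf, count);
    if (fd == target_fd)
        fprintf(stderr, "read %zd bytes from /dev/random\n", n);
    return n;
}
EOF
gcc -shared -fPIC -o logread.so logread.c -ldl
# Demo with head; point LD_PRELOAD at gpg instead to watch its reads.
LD_PRELOAD=$PWD/logread.so head -c 16 /dev/random > /dev/null
```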