It will eventually.
In:

cat /dev/random | strings --bytes 1 | tr -d '\n\t '

cat will never buffer, but it's superfluous anyway as there's nothing to concatenate here:

< /dev/random strings --bytes 1 | tr -d '\n\t '

strings, though, since its output is no longer a terminal, will buffer its output by blocks (of something like 4 or 8 kB) as opposed to lines when the output goes to a terminal.
So it will only start writing to stdout when it has accumulated 4 kB worth of characters to output, which on /dev/random is going to take a while.
tr's output goes to a terminal (if you're running that at a shell prompt in a terminal), so it will buffer its output line-wise. Because you're removing the \n characters, it will never have a full line to write, so instead it will write as soon as a full block has been accumulated (as when the output doesn't go to a terminal).

So, tr is likely not to write anything until strings has read enough from /dev/random to write 8 kB (2 blocks, possibly much more) of data (since the first block will probably contain some newline, tab or space characters).
On the system I'm trying this on, I can get an average of 3 bytes per second from /dev/random (as opposed to 12 MiB per second from /dev/urandom), so in the best-case scenario (the first 4096 bytes from /dev/random are all printable ones), we're talking 22 minutes before tr starts to output anything. But it's more likely going to be hours (in a quick test, I can see strings writing a block every 1 to 2 blocks read, and the output blocks contain about 30% newline characters, so I'd expect it would need to read at least 3 blocks before tr has 4096 characters to output).
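For the curious, the 22-minute figure is just the buffer size divided by the observed rate (a quick sanity check in shell arithmetic; the 3 bytes/second rate is the measurement from my system above):

```shell
# 4096 bytes to fill one stdio buffer, at ~3 bytes/second from /dev/random,
# converted to minutes (integer division is close enough here).
echo "$(( 4096 / 3 / 60 )) minutes"   # prints "22 minutes"
```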
To avoid that, you could do:
< /dev/random stdbuf -o0 strings --bytes 1 | stdbuf -o0 tr -d '\n\t '
stdbuf is a GNU command (also found on some BSDs) that alters the stdio buffering of commands via an LD_PRELOAD trick.
Note that instead of strings, you can use tr -cd '[:graph:]', which will also exclude tab, newline and space. You may want to fix the locale to C as well to avoid possible future surprises with UTF-8 characters.
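Putting those suggestions together, a minimal sketch (using /dev/urandom rather than /dev/random so it returns immediately; swap the device back if you specifically want the blocking pool):

```shell
# Fix the locale so tr works byte-wise, keep only printable non-space
# characters, and stop after 16 of them. head closing the pipe ends tr.
LC_ALL=C tr -cd '[:graph:]' < /dev/urandom | head -c 16; echo
```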
GnuPG consumes several bytes from /dev/random for each random byte it actually uses. You can easily check that with this command:
strace -e trace=open,read gpg --armor --gen-random 2 16 2>&1 | tail
open("/etc/gcrypt/rngseed", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/dev/urandom", O_RDONLY) = 3
read(3, "\\\224F\33p\314j\235\7\200F9\306V\3108", 16) = 16
open("/dev/random", O_RDONLY) = 4
read(4, "/\311\342\377...265\213I"..., 300) = 128
read(4, "\325\3\2161+1...302@\202"..., 172) = 128
read(4, "\5[\372l\16?\...6iY\363z"..., 44) = 44
open("/home/hl/.gnupg/random_seed", O_WRONLY|O_CREAT, 0600) = 5
cCVg2XuvdjzYiV0RE1uzGQ==
+++ exited with 0 +++
In order to output 16 bytes of high-quality entropy, GnuPG reads 300 bytes from /dev/random.
This is explained here: Random-Number Subsystem Architecture
Linux stores a maximum of 4096 bits (see cat /proc/sys/kernel/random/poolsize) of entropy. If a process needs more than is available (see cat /proc/sys/kernel/random/entropy_avail), then CPU usage becomes more or less irrelevant, as the feeding speed of the kernel's entropy pool becomes the limiting factor.
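You can inspect those pool figures directly (a sketch; the /proc paths are Linux-specific, and on recent kernels the pool is a fixed-size state so entropy_avail simply reports a constant once seeded):

```shell
# Print the kernel's entropy pool size and current estimate, if the
# corresponding /proc files exist on this system.
for f in poolsize entropy_avail; do
  p=/proc/sys/kernel/random/$f
  if [ -r "$p" ]; then
    printf '%s = %s\n' "$f" "$(cat "$p")"
  fi
done
```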
Best Answer
entropy_avail does not indicate the number of bits available in /dev/random. It indicates the kernel's entropy estimate of the RNG state that powers /dev/random. That entropy estimate is a pretty meaningless quantity, mathematically speaking; but Linux blocks /dev/random if the entropy estimate is too low.

A program reading from /dev/random blocks until the value in /proc/sys/kernel/random/entropy_avail becomes larger than /proc/sys/kernel/random/read_wakeup_threshold. Reading from /dev/random consumes entropy at the rate of 8 bits per byte.

But anyway, you shouldn't be using /dev/random. You should be using /dev/urandom, which is just as secure, including for generating cryptographic keys, and which doesn't block. Generating random numbers does not consume entropy: once the system has enough entropy, it's good for the lifetime of the universe. The OS saves an RNG seed to a file, so once the system has had enough entropy once, it has enough entropy even after a reboot.

The only cases where /dev/urandom is not safe are on a freshly-installed system booting for the first time, on a live system which has just booted (so generating cryptographic keys from a live system is not a good idea!), or on a freshly-booted embedded device that doesn't have either a hardware RNG or persistent memory. On such systems, wait until /dev/random agrees to let out 16 bytes to make sure the entropy pool is built up. Then use /dev/urandom.
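That first-boot pattern can be sketched in a couple of lines (on kernels 5.6 and later, reading /dev/random itself blocks only until the pool is initialized, which achieves the same effect):

```shell
# Block once until the kernel's pool is seeded, then take key material
# from the never-blocking /dev/urandom and print it as hex.
head -c 16 /dev/random > /dev/null
head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo
```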