Bash – measure amount of data read from /dev/random

bash, files, gpg, random, shell-script

background

I have written a collection of bash scripts (most of them available in German only so far; if you are interested, download the archive rather than the individual scripts) which help users create high-quality OpenPGP keys. These scripts are typically used in a "secure" environment (a Linux live CD/DVD), which leads to the problem that such systems have hardly any entropy.

For obvious reasons gpg reads a lot of data from /dev/random, which means that my poor users (worst of all those with an SSD) have to type a lot on the keyboard in order to generate the required entropy.

I have written a simple script which shows users the current size of the entropy pool (which fluctuates quickly between 0 and 64 while gpg reads data). I would like to also show a kind of progress bar so that users can see that they have generated, say, about 50% of the needed entropy. The required amount should always be (nearly) the same (until I change the key size).
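For illustration, a minimal sketch of such a progress bar might look like the following. The function name, the bar width, and the target of 300 bytes are my own assumptions, not values taken from gpg; pick a target that matches what your key size actually consumes:

```shell
#!/bin/sh
# Sketch of a text progress bar for entropy collection.
# entropy_bar CURRENT TARGET prints e.g. "[##########----------]  50%".
entropy_bar() {
    cur=$1 target=$2 width=20
    [ "$cur" -gt "$target" ] && cur=$target
    filled=$(( cur * width / target ))
    pct=$(( cur * 100 / target ))
    bar='' i=0
    while [ "$i" -lt "$width" ]; do
        if [ "$i" -lt "$filled" ]; then bar="${bar}#"; else bar="${bar}-"; fi
        i=$(( i + 1 ))
    done
    printf '[%s] %3d%%\n' "$bar" "$pct"
}

entropy_bar 150 300

# Real use on Linux would poll the kernel's pool estimate, e.g.:
# while :; do
#     entropy_bar "$(cat /proc/sys/kernel/random/entropy_avail)" 300
#     sleep 1
# done
```

The loop at the bottom is commented out because it never terminates; it shows where the live value from /proc/sys/kernel/random/entropy_avail would feed in.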

question

So the question is: how can I (easily) measure the amount of data that has been read from /dev/random (by a certain process or by the whole system)? The only idea I have had so far is attaching strace to gpg and tracing the read()s on the respective file descriptor. But maybe there is a much better solution.
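As a sketch of that strace idea: the byte counts are in read()'s return values, so summing them is a small awk job. The descriptor number (3), the helper name, and the canned sample lines below are assumptions for illustration only:

```shell
#!/bin/sh
# Sum the bytes returned by read() calls on fd 3 in strace output.
# Failed reads (negative return values, e.g. EAGAIN) are skipped.
strace_sum() {
    awk -F'= ' '/^read\(3,/ { if ($2 + 0 > 0) total += $2 } END { print total + 0 }'
}

# Demo on canned strace-style lines:
printf '%s\n' \
    'read(3, "\21\34", 2) = 2' \
    'read(3, "\253", 1)   = 1' \
    'read(3, 0x7f0000, 4) = -1 EAGAIN (Resource temporarily unavailable)' \
    | strace_sum

# Real use might look like (untested sketch; strace writes to its -o file):
# strace -f -e trace=read -o trace.log gpg --gen-key [...]
# strace_sum < trace.log
```

The demo prints 3, since only the two successful reads count.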

Best Answer

If a little redirection is acceptable, then pv is a good general-purpose way to achieve this type of thing, but GPG (unsurprisingly) has /dev/random hard-coded into it, so that's not going to work here without some hackery. On Linux, using unshare to temporarily overlay /dev/random is probably the least disagreeable approach, though it requires root permissions:

mkfifo $HOME/rngfifo
pv -s 300 /dev/random > $HOME/rngfifo

pv will block until there is a reader on the fifo. Then, as root or via sudo:

unshare -m -- sh -c "mount --bind $HOME/rngfifo /dev/random && gpg --gen-key [...]"

One obviously useful source of data is the random device driver itself (drivers/char/random.c). It supports a "debug" parameter, but sadly in the versions I've checked (2.6.x and 3.4.x) it is if-def'ed out (#if 0), and it has been removed completely in recent kernels in favour of ftrace support: the driver makes an ftrace call (trace_extract_entropy()) each time data is read. For this task, ftrace seems like overkill to me, as do systemtap and the other tracing and debugging options (PDF).

A simple (but unappealing to most) option is to use an injected library to wrap the relevant open() and read() calls at the libc interface, similar to the solution to this question: Dynamic file content generation: Satisfying a 'file open' by a 'process execution'. If you wrap open64() and arrange for it to cache the descriptor when /dev/random is opened, you can log the size of each read().

To help get the entropy rolling in, I highly recommend asciipacman ;-)