Entropy is fed into /dev/random at a rather slow rate, so if you use any program that reads from /dev/random, it's pretty common for the entropy estimate to be low.

Even if you believe in Linux's definition of entropy, low entropy isn't a security problem. /dev/random blocks until it's satisfied that it has enough entropy. With low entropy, you'll get applications sitting around waiting for you to wiggle the mouse, but not a loss of randomness.
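You can see this blocking behavior directly from the shell. A small sketch (the 2-second timeout is an arbitrary choice, and on recent kernels /dev/random rarely blocks at all, so you may only see the blocking on older systems):

```shell
# Try to read 64 bytes from each device. /dev/random may block while the
# kernel's entropy estimate is low (give up after 2 seconds); /dev/urandom
# always returns immediately.
if timeout 2 head -c 64 /dev/random > /dev/null; then
    echo "/dev/random returned promptly"
else
    echo "/dev/random blocked (low entropy estimate)"
fi
head -c 64 /dev/urandom > /dev/null && echo "/dev/urandom returned immediately"
```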
In fact, Linux's definition of entropy is flawed: it's an extremely conservative definition that strives for a theoretical level of randomness which is useless in practice. Entropy does not wear out: once you have enough, you have enough. Unfortunately, Linux only has two interfaces to get random numbers: /dev/random, which blocks when it shouldn't, and /dev/urandom, which never blocks. Fortunately, in practice /dev/urandom is almost always correct, because a system quickly gathers enough entropy, after which point /dev/urandom is fine forever (including for uses such as generating cryptographic keys).
The only time when /dev/urandom is problematic is when a system doesn't have enough entropy yet, for example on the first boot of a fresh installation, after booting a live CD, or after cloning a virtual machine. In such situations, wait until /proc/sys/kernel/random/entropy_avail reaches 200 or so. After that, you can use /dev/urandom as much as you like.
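For example, a minimal shell loop that waits for the pool to be seeded (the 200 threshold comes from the paragraph above and the 1-second poll interval is an arbitrary choice, not kernel constants):

```shell
#!/bin/sh
# Poll the kernel's entropy estimate until it reaches roughly 200 bits.
# Note: on recent kernels, entropy_avail typically reports a constant 256
# once the pool is initialized, so this loop exits almost immediately.
while [ "$(cat /proc/sys/kernel/random/entropy_avail)" -lt 200 ]; do
    sleep 1
done
echo "entropy pool seeded; /dev/urandom is fine from now on"
```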
Because of the way the mount point gets hidden by umount -l, there is no way to find out afterwards which processes are still using the affected files. The only way to get the list is to run lsof before the umount -l and grep for the relevant path. Example: lsof | grep "/mountPoint/".

If you want, you can take that output, extract the PIDs, and continue monitoring them.
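For instance, a small pipeline that pulls the unique PIDs out of that lsof output (the /mountPoint path is the placeholder from the example above; substitute your own):

```shell
# Capture lsof output before running umount -l, then extract the PIDs of
# processes holding files under the mount point open.
# PID is the second column of lsof's default output format.
lsof | grep "/mountPoint/" | awk '{print $2}' | sort -un
```

You can then feed those PIDs to ps, or watch them in a loop to see when the last user of the old mount goes away.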
Best Answer
The short answer is 0, because entropy is not consumed.
There is a common misconception that entropy is consumed — that each time you read a random bit, this removes some entropy from the random source. This is wrong. You do not “consume” entropy. Yes, the Linux documentation gets it wrong.
During the life cycle of a Linux system, there are two stages:

1. Initially, the system does not have enough entropy. /dev/random will block until it thinks it has amassed enough entropy; /dev/urandom happily provides low-entropy data.
2. Once the system has gathered enough entropy, /dev/random assigns a bogus rate of entropy leakage and blocks now and then; /dev/urandom happily provides crypto-quality random data.

FreeBSD gets it right: on FreeBSD, /dev/random (or /dev/urandom, which is the same thing) blocks if it doesn't have enough entropy, and once it does, it keeps spewing out random data. On Linux, neither /dev/random nor /dev/urandom is the useful thing.

In practice, use /dev/urandom, and make sure when you provision your system that the entropy pool is fed (from disk, network and mouse activity, from a hardware source, from an external machine, …).

While you could try to count how many bytes are read from /dev/urandom, this is completely pointless. Reading from /dev/urandom does not deplete the entropy pool. Each consumer uses up 0 bits of entropy per any unit of time you care to name.
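To see this in practice, here is a small sketch (assumes a Unix-like system with od available) that repeatedly pulls bytes from /dev/urandom; no read blocks, no matter how many times you repeat it:

```shell
# Read 16 bytes from /dev/urandom several times in a row and print each
# batch as hex. None of these reads blocks, and none "uses up" entropy.
for i in 1 2 3; do
    head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'
    echo
done
```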