I am not quite understanding the difference between /dev/random and /dev/urandom on Linux systems.

What does it mean to run out of "entropy", and how does the system get more? What does it mean when people say /dev/random "blocks" if there isn't enough entropy? Which one should I use in which scenarios?
Best Answer
Randomness means the next value you get has no dependency on the previous values, and there is no way for you to predict it.
This is actually hard for a computer to do, since a computer is pretty much just a really fast calculator - it can do math, but it will always get the exact same answer every time. You can get something close to randomness with math, called "pseudorandomness" - but a plain pseudorandom generator on its own isn't high quality enough to be used for cryptography.
So Linux collects "randomness" in a pool from various sources (such as the timing between input events). The "amount" of randomness in this pool is the entropy. More entropy = fewer regular, repeating, predictable patterns - you want as much entropy as possible. The Linux kernel will "refill" its pool when entropy gets low, but how quickly depends on what's happening on the system, since it uses the timing of unpredictable hardware events to generate it.
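As a quick illustration, the kernel exposes its current entropy estimate through procfs. A minimal sketch (Linux-specific path; note that on kernels 5.6 and later the pool design changed, so this file typically reports a constant value such as 256):

```python
# Read the kernel's entropy estimate (Linux-specific).
# On modern kernels (>= 5.6) this usually reports a fixed 256 bits,
# because the old notion of a drainable pool no longer applies.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    entropy = int(f.read().strip())

print(f"Available entropy estimate: {entropy} bits")
```

Watching this value on an older kernel while moving the mouse or typing shows the pool filling from input-event timing.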
If the pool is empty, /dev/random will block - that is, stop giving out data - until the kernel gathers enough entropy. /dev/urandom will keep going, using a cryptographically secure pseudorandom generator seeded from that same pool.

Now that you have the basics down: you can always use urandom, and here is why.
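In practice you rarely read the device node by hand; a sketch of the usual approach in Python, where os.urandom draws from the same kernel CSPRNG as /dev/urandom and never blocks:

```python
import os

# os.urandom reads from the kernel's CSPRNG (the same source as
# /dev/urandom) and never blocks - suitable for keys, tokens, salts.
key = os.urandom(32)  # 32 random bytes
print(key.hex())

# Opening the device node directly is equivalent on Linux:
with open("/dev/urandom", "rb") as f:
    nonce = f.read(16)
print(len(nonce))  # 16
```

Every read returns immediately with fresh bytes, regardless of the kernel's entropy estimate.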
Here's an excerpt from that article explaining why it doesn't matter: