dd – Determine the Optimal Value for the bs Parameter

Tags: dd · file-copy · performance

On occasion I've seen comments online along the lines of "make sure you set 'bs=' because the default value will take too long," and my own extremely unscientific experience ("well, that seemed to take longer than that other time last week") seems to bear that out. So whenever I use 'dd' (typically in the 1–2 GB range) I make sure to specify the block-size parameter. About half the time I use the value specified in whatever online guide I'm copying from; the rest of the time I pick some number that makes sense from the 'fdisk -l' listing for what I assume is the slower media (e.g. the SD card I'm writing to).

For a given situation (media type, bus sizes, or whatever else matters), is there a way to determine a "best" value? Is it easy to determine? If not, is there an easy way to get 90-95% of the way there? Or is "just pick something bigger than 512" even the correct answer?

I've thought of trying the experiment myself, but (in addition to being a lot of work) I'm not sure what factors impact the answer, so I don't know how to design a good experiment.
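For what it's worth, a rough version of that experiment is not much work. The sketch below (all paths and sizes are illustrative, not taken from the question) writes the same amount of data with several block sizes and lets dd itself report the throughput; pointing the output at the actual target device, rather than a file in /tmp, is what would make the numbers meaningful for a given setup.

```shell
#!/bin/sh
# Illustrative micro-benchmark: same total data, varying block size.
# OUTFILE is a placeholder; aim it at the real target device for real numbers.
OUTFILE=/tmp/dd-bs-test.img
TOTAL=$((16 * 1024 * 1024))   # 16 MiB per run, kept small for illustration

for bs in 4096 65536 1048576; do
    count=$((TOTAL / bs))
    echo "bs=$bs:"
    # conv=fsync forces the data to the device before dd exits, so the
    # reported rate reflects the media rather than the page cache.
    # dd prints its statistics on stderr; keep just the summary line.
    dd if=/dev/zero of="$OUTFILE" bs="$bs" count="$count" conv=fsync 2>&1 | tail -n 1
    rm -f "$OUTFILE"
done
```

Note that `conv=fsync` is a GNU dd extension; without it (or `oflag=direct`), small runs mostly measure RAM, not the disk.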

Best Answer

dd dates from back when it was needed to translate old IBM mainframe tapes, and the block size had to match the one used to write the tape or data blocks would be skipped or truncated. (9-track tapes were finicky. Be glad they're long dead.) These days, the block size should be a multiple of the device's sector size, usually 4 KB (very recent disks may use much larger sectors, and very small thumb drives may use smaller ones, but 4 KB is a reasonable middle ground either way), and the larger the block size, the better for performance. I often use 1 MB block sizes with hard drives. (We have a lot more memory to throw around these days, too.)
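As a concrete illustration of the "large multiple of the sector size" advice, the sketch below copies a file with 1 MiB blocks (256 × 4 KiB, so it stays sector-aligned) and verifies the result. The /tmp paths are hypothetical stand-ins; when imaging a real drive you would point the output at the device instead.

```shell
#!/bin/sh
# Illustrative copy with a 1 MiB block size (a multiple of the common
# 4 KiB sector size). SRC and DST are placeholder paths, not real media.
SRC=/tmp/dd-src.img
DST=/tmp/dd-dst.img

# Make an 8 MiB test file to stand in for the source image.
dd if=/dev/urandom of="$SRC" bs=1M count=8 2>/dev/null

# The actual copy: 1 MiB blocks, flushed to stable storage before exit.
dd if="$SRC" of="$DST" bs=1M conv=fsync 2>/dev/null

# Byte-for-byte verification of the copy.
cmp -s "$SRC" "$DST" && echo "copy verified"
```

On Linux, the device's actual physical sector size can be read with `blockdev --getpbsz /dev/sdX` or from `/sys/block/sdX/queue/physical_block_size` if you want to confirm the 4 KB assumption before picking a block size.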
