I'm trying to benchmark two different ways of processing a file. I have a small amount of input data, but to get good comparisons I need to repeat each test a number of times.
Rather than just repeating the tests, I would like to duplicate the input data a number of times (e.g. 1000), so a 3-line file becomes 3000 lines and I can run a much more meaningful test.
I'm passing the input data in via a filename:
mycommand input-data.txt
Best Answer
You don't need input-duplicated.txt. Try:
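A minimal sketch, assuming your shell supports process substitution so the duplicated data never has to be written to disk:

mycommand <(perl -0777pe '$_=$_ x 1000' input-data.txt)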
Explanation
0777: -0 sets the input record separator (the Perl special variable $/, which is a newline by default). Setting this to a value of 0400 or above will cause Perl to slurp the entire input file into memory.

pe: the -p means "print each input line after applying the script given by -e to it".

$_=$_ x 1000: $_ is the current input line. Since we're reading the entire file at once because of -0777, this means the entire file. The x 1000 will result in 1000 copies of the entire file being printed.
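As a quick sanity check (a sketch; the count of 3000 assumes the 3-line input-data.txt from the question ends with a trailing newline):

perl -0777pe '$_=$_ x 1000' input-data.txt | wc -l    # should print 3000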