for i in {0..1000000}
and for i in $(seq 1000000)
both build up a big list and then loop over it. That's inefficient and uses a lot of memory.
Use:
for ((i = 0; i <= 1000000; i++))
instead. Or POSIXly:
i=0; while [ "$i" -le 1000000 ]; do
...
i=$((i + 1))
done
Or:
seq 1000000 | xargs...
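To illustrate the difference, here is a minimal sketch (summing 1..100, with the bound reduced so it runs quickly) using the arithmetic for loop; unlike {1..100} or $(seq 100), it never builds a list of words in memory:

```shell
# C-style arithmetic loop (bash/ksh/zsh, not plain POSIX sh):
# the counter is incremented in place, no word list is generated.
sum=0
for ((i = 1; i <= 100; i++)); do
  sum=$((sum + i))
done
echo "$sum"   # 5050
```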
To get a file full of CRLFs:
yes $'\r' | head -n 1000000 > file
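As a quick sanity check (the file name crlf.txt is just for illustration): yes prints its argument followed by a newline, so each output line is CR + LF, two bytes per line:

```shell
# 1000 lines, each "\r\n" (2 bytes), should total 2000 bytes.
yes $'\r' | head -n 1000 > crlf.txt
bytes=$(wc -c < crlf.txt)
rm crlf.txt
echo "$bytes"   # 2000
```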
Generally, loops should be avoided when possible in shells.
What happens is that bash first expands *.djvu{,.bk} into *.djvu *.djvu.bk, and then does glob-expansion on those. This would explain what you observe: in your case, *.djvu matches an existing file, say foo.djvu, and expands into that, but *.djvu.bk matches no file, and thus expands as itself, *.djvu.bk.
The order of expansion is specified in the bash documentation:
The order of expansions is: brace expansion, tilde expansion, parameter, variable and arithmetic expansion and command substitution (done in a left-to-right fashion), word splitting, and pathname expansion.
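You can reproduce this order in an empty scratch directory (created here with mktemp, just for the demonstration): brace expansion runs first, producing two patterns, and only the one that matches a file is replaced by a filename:

```shell
# Create a scratch directory containing only foo.djvu.
dir=$(mktemp -d)
cd "$dir"
touch foo.djvu

# Brace expansion yields the patterns *.djvu and *.djvu.bk;
# globbing then replaces the first (it matches foo.djvu) and
# leaves the second literal, since no file matches it.
matches=( *.djvu{,.bk} )
echo "${matches[@]}"   # foo.djvu *.djvu.bk
```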
I would suggest rewriting your copy command as:
for f in *.djvu; do cp -- "$f" "$f".bk; done
Or perhaps, to avoid the syntactic overhead of an explicit for loop:
parallel -j1 cp -- {} {}.bk ::: *.djvu
(On second thoughts... that's not really much shorter.)
To answer your sub-question "how could it be expanded", one could use a sub-command (example in a directory containing just foo.djvu and bar.djvu):
$ echo $(echo *.djvu){,.bk}
bar.djvu foo.djvu bar.djvu foo.djvu.bk
But that isn't as safe a solution as the for loop or parallel call
above; it will break down on file names containing white space.
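The breakage is easy to demonstrate in a scratch directory (created with mktemp for illustration): with a single file whose name contains a space, the unquoted command substitution splits it into extra words:

```shell
dir=$(mktemp -d)
cd "$dir"
touch 'a b.djvu'

# Brace expansion gives $(echo *.djvu) and $(echo *.djvu).bk;
# the substitutions expand to "a b.djvu" and "a b.djvu.bk",
# which word splitting then breaks into four words, not two.
set -- $(echo *.djvu){,.bk}
count=$#
echo "$count"   # 4
```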
Best Answer
You might use eval together with
IFS=,; joined="${array[*]}"
(expanding the array with "${array[*]}" joins its values with the first character of IFS, here a comma), or just two for loops:
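A minimal sketch of the join (the array contents are just placeholders); setting IFS inside a command substitution's subshell keeps the change from leaking into the rest of the script:

```shell
array=(one two three)

# "${array[*]}" joins elements with the first character of IFS;
# the subshell confines the IFS change to this one expansion.
joined=$(IFS=,; printf '%s' "${array[*]}")
echo "$joined"   # one,two,three
```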