Shell – Efficient way to create multiple files

Tags: files, shell, touch

I have been testing how to find the directory that uses the most inodes, and while testing I ran

touch test_{1..1391803}.txt

But it gave me the error "-bash: /usr/bin/touch: Argument list too long". Now I'm running the command below, but it seems it will take a huge amount of time:

ruby -e '1.upto(1391803) { |n| %x( touch "test_#{n}.txt" ) }'

So the question is: is there a way to create this many files in a small amount of time? Should I touch 100,000 files per loop, or is there a better way?

Test Results:

No. 1

[root@dc1 inode_test]# time seq 343409 | xargs touch

real    0m7.760s
user    0m0.525s
sys     0m4.385s

No. 2

[root@test-server inode_test]# time echo 'for (i=1;i<=343409;i++) i' | bc | xargs touch

real    0m8.781s
user    0m0.722s
sys     0m4.997s

No. 3

[root@test-server inode_test]# time printf '%s ' {1..343409} | xargs touch

real    0m8.913s
user    0m1.144s
sys     0m4.541s

No. 4

[root@test-server inode_test]# time awk 'BEGIN {for (i=1; i<=343409; i++) {printf "" >> i; close(i)}}'

real    0m12.185s
user    0m2.005s
sys     0m6.057s

No. 5

[root@test-server inode_test]# time ruby -e '1.upto(343409) { |n| File.open("#{n}", "w") {} }'

real    0m12.650s
user    0m3.017s
sys     0m4.878s

Best Answer

The limitation is on the total size of the arguments when executing a command. So the options are: execute the command with fewer arguments at a time (for instance, with xargs to run smaller batches), increase the limit (ulimit -s 100000 on Linux), not execute anything at all (do it all in the shell), or build the list inside the tool that creates the files.
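On Linux, the space allowed for the argument list is derived from the stack size limit (roughly a quarter of it), so the limit-raising option looks like this (a sketch; it needs to stay within the hard limit, and the value may need to be raised further if touch still reports "Argument list too long"):

ulimit -s 100000    # stack soft limit in KiB, affects this shell only
touch test_{1..1391803}.txt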

zsh, ksh93, bash:

printf '%s ' {1..1391803} | xargs touch

printf is a builtin, so nothing is executed and the limit is not reached. xargs splits the list of arguments it passes to touch so each invocation stays under the limit. That's still not very efficient, as the shell first has to build the whole list (slow, especially with bash), store it in memory, and only then print it.
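To see the batching at work, you can count how many arguments each invocation receives (a quick sketch, substituting sh for touch):

printf '%s ' {1..1391803} | xargs sh -c 'echo "$# files in this batch"' sh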

seq 1391803 | xargs touch

(assuming you have a seq command) would be more efficient.
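If seq is missing, any POSIX awk can generate the same list (a sketch):

awk 'BEGIN {for (i = 1; i <= 1391803; i++) print i}' | xargs touch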

for ((i=1; i<=1391803; i++)); do : >> "$i"; done

Everything is done in the shell, with no big list stored in memory. It should be relatively efficient, except perhaps with bash.

POSIXly:

i=1; while [ "$i" -le 1391803 ]; do : >> "$i"; i=$(($i + 1)); done

echo 'for (i=1;i<=1391803;i++) i' | bc | xargs touch

awk 'BEGIN {for (i=1; i<=1391803; i++) {printf "" >> i; close(i)}}'
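In those last two, bc prints the sequence for xargs to batch into touch invocations, while awk creates each file itself: the loop counter is used as the file name, and close(i) prevents awk from running out of file descriptors.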