You can do this in Bash using read and substring parameter expansion. The form is ${PARAMETER:OFFSET:LENGTH}, where OFFSET is zero-based. Save the following script as 'split', for example, then run it with the data file as its argument:
#!/usr/bin/env bash
# Usage: ./split "data.txt"
while IFS= read -r line
do
    printf '%s\n' "${line:0:10}"  >&3   # columns  1-10
    printf '%s\n' "${line:10:14}" >&4   # columns 11-24
    printf '%s\n' "${line:24:8}"  >&5   # columns 25-32
    printf '%s\n' "${line:32:22}" >&6   # columns 33-54
done < "$1" 3> output01.txt 4> output02.txt 5> output03.txt 6> output04.txt
# end file
Of course you may need to adjust the offsets and lengths slightly for your data, but you can use this model for many kinds of fixed-width file processing. The positions above produce the output requested in the question. A good reference on parameter expansion can be found at bash-hackers.org.
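For a quick interactive illustration of the substring expansion (the sample string here is invented for demonstration):

$ line='ABCDEFGHIJ0123456789'
$ printf '%s\n' "${line:0:10}"   # OFFSET 0, LENGTH 10: the first ten characters
ABCDEFGHIJ
$ printf '%s\n' "${line:10:5}"   # characters 11-15
01234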
As a postscript, after incorporating the recommended practice improvements (see comments), keep in mind that for large files the Bash approach will not be efficient in CPU time or resources. To quantify this statement, I've prepared a brief comparison below. First create a test file (bigsplit.txt) by repeating the data of the opening post until it is 300,000 lines long (16,500,000 bytes). Then compare split, cut, and awk, where the cut and awk implementations are identical to Stéphane Chazelas's versions. The CPU time, in seconds, is the sum of the system and user CPU times, and the RAM figure is the maximum used.
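The test file can be built by replicating the sample data; a minimal sketch, assuming the opening post's sample lines are saved in data.txt (that filename is an assumption) and that its line count divides 300,000 evenly:

# Hypothetical construction of the test file: each line is 54 characters
# plus a newline (55 bytes), so 300,000 lines come to 16,500,000 bytes.
n=$(wc -l < data.txt)
for ((i = 0; i < 300000 / n; i++)); do
    cat data.txt
done > bigsplit.txt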
$ stat -c %s bigsplit.txt && wc -l bigsplit.txt
16500000
300000 bigsplit.txt
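The cut and awk scripts themselves are not reproduced here; sketches of plausible implementations follow, assuming the same four fixed-width fields as above (these are illustrations, not necessarily Stéphane Chazelas's exact code):

#!/usr/bin/env bash
# ./cut -- four passes over the input, one per output file
cut -c 1-10  < "$1" > output01.txt
cut -c 11-24 < "$1" > output02.txt
cut -c 25-32 < "$1" > output03.txt
cut -c 33-54 < "$1" > output04.txt

#!/usr/bin/env bash
# ./awk -- a single pass; substr() uses 1-based offsets
awk '{
    print substr($0,  1, 10) > "output01.txt"
    print substr($0, 11, 14) > "output02.txt"
    print substr($0, 25,  8) > "output03.txt"
    print substr($0, 33, 22) > "output04.txt"
}' "$1"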
$ ./benchmark ./split bigsplit.txt
CPU TIME AND RESOURCE USAGE OF './split bigsplit.txt'
VALUES ARE THE AVERAGE OF ( 10 ) TRIALS
CPU, sec : 88.41
CPU, pct : 99.00
RAM, kb : 1494.40
$ ./benchmark ./cut bigsplit.txt
CPU TIME AND RESOURCE USAGE OF './cut bigsplit.txt'
VALUES ARE THE AVERAGE OF ( 10 ) TRIALS
CPU, sec : 0.86
CPU, pct : 99.00
RAM, kb : 683.60
$ ./benchmark ./awk bigsplit.txt
CPU TIME AND RESOURCE USAGE OF './awk bigsplit.txt'
VALUES ARE THE AVERAGE OF ( 10 ) TRIALS
CPU, sec : 1.19
CPU, pct : 99.00
RAM, kb : 1215.60
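The benchmark wrapper itself was not posted; a minimal sketch of how such averages could be collected with GNU time (a hypothetical reconstruction; the CPU-percentage line is omitted for brevity):

#!/usr/bin/env bash
# Usage: ./benchmark ./split bigsplit.txt
trials=10
tmp=$(mktemp)
for ((i = 0; i < trials; i++)); do
    # %U user CPU sec, %S system CPU sec, %M maximum resident set size (kB)
    /usr/bin/time -a -o "$tmp" -f '%U %S %M' "$@" > /dev/null
done
printf "CPU TIME AND RESOURCE USAGE OF '%s'\n" "$*"
printf 'VALUES ARE THE AVERAGE OF ( %d ) TRIALS\n' "$trials"
awk '{ cpu += $1 + $2; ram += $3 }
     END { printf "CPU, sec : %.2f\nRAM, kb : %.2f\n", cpu / NR, ram / NR }' "$tmp"
rm -f "$tmp"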
The comparison follows, with the best performer, cut, assigned a value of 1:
RELATIVE PERFORMANCE

              CPU, sec   RAM, kb
              --------   -------
cut               1.0       1.0
awk               1.4       1.8
split (Bash)    102.8       2.2
No doubt that, in this case, cut is the tool to use for larger files. From rough, preliminary tests of the Bash split above, the while read loop accounts for about 5 seconds of the CPU time, the parameter expansions for about 8 seconds, and the remainder is attributable to the printf-to-file operations.
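Figures like these can be approximated by timing stripped-down variants of the loop; a sketch of one way to isolate the costs (the numbers above are my measurements, not output of this sketch):

# read loop alone
time while IFS= read -r line; do :; done < bigsplit.txt

# read loop plus the four substring expansions; ':' expands its
# arguments and discards them, so no output cost is included
time while IFS= read -r line; do
    : "${line:0:10}" "${line:10:14}" "${line:24:8}" "${line:32:22}"
done < bigsplit.txt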