To compile a software package on a workstation with many CPU cores (say 12), the configuration stage often takes much longer than the actual compilation, because `./configure` runs its tests one at a time, while `make -j` runs `gcc` and other commands in parallel.

It feels like a huge waste of resources to have the remaining 11 cores sitting idle most of the time, waiting for the slow `./configure` to finish. Why does it run the tests sequentially? Do the tests depend on one another? I may be mistaken, but the majority of them look independent.

More importantly, is there any way to speed up `./configure`?
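To illustrate the hunch that most checks are independent, here is a hypothetical sketch in plain POSIX shell: each "check" is a stand-in (`sleep 1` plus an echo) for the real work of compiling a tiny test program, and the checks run as background jobs instead of sequentially.

```shell
#!/bin/sh
# Hypothetical sketch: if configure's checks really are independent,
# they could run concurrently as background jobs. `sleep 1` stands in
# for each check's real work (compiling a small test program).
check() {
    sleep 1
    echo "checking for $1... yes"
}

start=$(date +%s)
for hdr in stdio.h stdlib.h string.h unistd.h; do
    check "$hdr" &      # launch each check in the background
done
wait                    # join all four checks
elapsed=$(( $(date +%s) - start ))
echo "elapsed: ${elapsed}s"
```

With four one-second checks, the whole run takes about one second of wall-clock time instead of four. Real `configure` scripts can't do this today because each check may read variables set by earlier checks.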
Edit: To illustrate the situation, here is an example with GNU Coreutils:

```shell
cd /dev/shm
rm -rf coreutils-8.9
tar -xzf coreutils-8.9.tar.gz
cd coreutils-8.9
time ./configure
time make -j24
```
Results:

```
# For `time ./configure`
real    4m39.662s
user    0m26.670s
sys     4m30.495s

# For `time make -j24`
real    0m42.085s
user    2m35.113s
sys     6m15.050s
```
With coreutils-8.9, `./configure` takes about 6 times longer than `make`. Although `./configure` uses less CPU time (look at the "user" and "sys" times), it takes much longer in wall-clock terms ("real") because it isn't parallelized. I have repeated the test a few times (with the relevant files probably staying in the memory cache), and the times are within 10%.
Best Answer
I recall discussions on the Autoconf mailing list about this issue from about 10 years ago, when most people actually had only one CPU core. But nothing has been done, and I suspect nothing will be done. It would be very hard to set up all the dependencies for parallel processing in `configure`, and to do it in a way that is portable and robust.

Depending on your particular scenario, there might be a few ways to speed up the configure runs anyway. For example:
- Use `dash` instead of `bash` as `/bin/sh`. (Note: under Debian, `dash` is patched so that `configure` doesn't use it, because using it breaks a lot of `configure` scripts.)
- Use `configure -q`.
- Use `configure -C` to cache test results between runs. See the Autoconf documentation for details.
- Use a site-wide cache file (`config.site`). Again, see the documentation.
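As a sketch of how these options fit together (the cache-variable name in the site file is illustrative; check the `config.log` of the package you build to see which entries it actually uses):

```shell
# Re-run configure with a result cache; the second run reuses
# the config.cache file written by the first.
./configure -C

# Pre-seed answers shared across many packages in a site file.
cat > "$HOME/config.site" <<'EOF'
ac_cv_prog_cc_g=yes    # illustrative cache entry; see your config.log
EOF
CONFIG_SITE="$HOME/config.site" ./configure

# Interpret the configure script with dash instead of bash
# (where the script tolerates it).
CONFIG_SHELL=/bin/dash /bin/dash ./configure
```

The `config.site` file is sourced as a shell script before the checks run, so any `ac_cv_*` variable set there is taken as a pre-answered check.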