I have a script that converts video files. I ran it on a server against test data and measured its running time with the time command. The result was:
real 2m48.326s
user 6m57.498s
sys 0m3.120s
Why is the real time so much lower than the user time? Does this have anything to do with multithreading, or is something else going on?
Edit: The script really did seem to run for roughly 2m48s of wall-clock time.
Best Answer
The output you show is a bit odd, since real time would usually be bigger than the other two.
Real time is wall-clock time (what you could measure with a stopwatch).
User time is the amount of CPU time spent in user mode within the process.
Sys time is the CPU time spent in the kernel within the process.

So if the work was done by several processors concurrently, the total CPU time (user + sys) can be higher than the elapsed wall-clock time.
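To see the difference between wall-clock time and CPU time in isolation, here is a quick sketch: timing a command that just sleeps uses essentially no CPU, so real is large while user and sys stay near zero.

    time sleep 2
    # prints roughly: real 0m2.00s, user 0m0.00s, sys 0m0.00s
    # sleeping consumes almost no CPU, so real is much larger than user + sys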
Was this a concurrent/multi-threaded/parallel type of application?
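If it was, you can reproduce the pattern with a minimal sketch like the one below (assuming bash on Linux with GNU coreutils): several CPU-bound jobs run in parallel, so their CPU seconds add up while the wall clock only ticks once.

    time ( for i in 1 2 3 4; do
             # each job hashes 500 MB of zeros: purely CPU-bound work
             head -c 500M /dev/zero | sha256sum > /dev/null &
           done
           wait )   # wait for all four background jobs before time stops

On a four-core machine this reports a user time roughly four times the real time, which is the same pattern your video-conversion run shows.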
Just as an example, this is what I get on my Linux system when I issue the time find . command. As expected, the elapsed real time is much larger than the other two for this single-user, single-core process.

The rule of thumb is: