This works very well:
while true; do uptime >> uptime.log; sleep 1; done
This will log your CPU load every second, appending it to the file uptime.log.
You can then import this file into Gnumeric or the OpenOffice spreadsheet to create a nice graph (select 'separated by spaces' on import).
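If you only want the timestamp and the three load averages for plotting, you could optionally pre-filter the log with awk before importing (this is an extra step, not part of the recipe above). Taking the last three fields keeps it working whether uptime prints "1 day," or "23 min,":

```shell
# Keep just the clock time and the three load averages from each uptime line.
# gsub strips the commas; the last three fields are always the load averages.
awk '{ gsub(",", "", $0); print $1, $(NF-2), $(NF-1), $NF }' uptime.log > load.log
```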
As Scaine noticed, this won't be enough to diagnose the problem. So, additionally, run this (or use his answer for this part):
while true; do (echo "%CPU %MEM ARGS $(date)" && ps -e -o pcpu,pmem,args --sort=pcpu | cut -d" " -f1-5 | tail) >> ps.log; sleep 5; done
This will append the top 10 most CPU-hungry processes to the file ps.log every five seconds.
Note that this is not the full boatload of information top would give you. It is just the top 10, showing only their CPU usage, memory usage and the first argument (i.e. their command without further arguments, as in /usr/bin/firefox).
After you've used a spreadsheet to create a graph and can see when your CPU load went through the roof, search this file for the nearest timestamp to see which process caused it.
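For instance, since every snapshot in ps.log starts with a dated header line, a simple grep can pull out the snapshot nearest the spike (the timestamp 23:09 below is just an illustration):

```shell
# Print the first snapshot whose header matches the time of the spike:
# the header line plus the ten process lines that follow it.
grep -A 10 '23:09' ps.log | head -n 11
```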
This is what those files will look like:
uptime.log
~$ cat uptime.log
22:57:42 up 1 day, 4:38, 4 users, load average: 1.00, 1.26, 1.21
22:57:43 up 1 day, 4:38, 4 users, load average: 0.92, 1.24, 1.21
22:57:44 up 1 day, 4:38, 4 users, load average: 0.92, 1.24, 1.21
22:57:45 up 1 day, 4:38, 4 users, load average: 0.92, 1.24, 1.21
...
ps.log
%CPU %MEM ARGS Mo 17. Jan 23:09:47 CET 2011
0.7 0.9 /usr/bin/compiz
0.8 0.5 /usr/lib/gnome-panel/clock-applet
1.1 1.7 /opt/google/chrome/chrome
1.2 0.3 /usr/bin/pulseaudio
1.8 4.0 /opt/google/chrome/chrome
2.6 1.5 /opt/google/chrome/chrome
2.6 3.2 /usr/bin/google-chrome
3.6 2.6 /opt/google/chrome/chrome
4.9 1.5 /usr/bin/X
5.7 1.6 /opt/google/chrome/chrome
%CPU %MEM ARGS Mo 17. Jan 23:09:48 CET 2011
0.7 0.9 /usr/bin/compiz
0.8 0.5 /usr/lib/gnome-panel/clock-applet
1.0 1.7 /opt/google/chrome/chrome
1.2 0.3 /usr/bin/pulseaudio
1.8 4.0 /opt/google/chrome/chrome
2.6 1.5 /opt/google/chrome/chrome
2.6 3.2 /usr/bin/google-chrome
3.6 2.6 /opt/google/chrome/chrome
4.9 1.5 /usr/bin/X
5.7 1.6 /opt/google/chrome/chrome
...
Oh, boy, I just caught this... quoting OP:
Furthermore, even after the process ends, the computer does not return to the previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.
OK, so that means you were exhausting the available RAM on your system, which means you're simply trying to run too many convert processes at once. We'd need to look at your actual syntax for spawning convert to advise, but basically, you need to make sure you don't open more simultaneous processes than you have the RAM to comfortably handle.
Since you state that what's causing this is convert *.tif blah.pdf, what's happening is that the contents of every single TIF, plus its conversion to PDF, are getting stuffed into RAM at once. What you need to do is split the job up so that this isn't necessary. One possibility that leaps to mind is doing something like find . -iname '*.tif' | xargs -I% convert % %.pdf instead, then using pdftk or something like it to glue all the individual PDFs together. If you really want to get fancy, and you have a multicore CPU, this also affords you the chance to write a small script that runs conversions in batches of n, where n is your number of cores, and get the whole thing done significantly faster. :)
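A minimal sketch of that batching idea, using xargs -P to cap the number of simultaneous convert processes at the core count (the use of nproc and the one-PDF-per-file approach are my assumptions, not the OP's original command):

```shell
# Convert each TIF to its own PDF, running at most one convert per CPU core,
# so only a few images' worth of data sit in RAM at any one time.
find . -iname '*.tif' -print0 |
  xargs -0 -I% -P "$(nproc)" convert % %.pdf
```

Afterwards, pdftk *.pdf cat output merged.pdf glues the per-file PDFs back into one document.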
pdftk how-to: http://ubuntuhowtos.com/howtos/merge_pdf_files (basically boils down to sudo apt-get install pdftk; pdftk *.pdf cat output merged.pdf)
Best Answer
You can create a one-liner in the shell: a small function, call it logpid(), that uses ps to sample a given process's CPU and memory usage once per second. To log the process with pid=123, just call it with that pid; to see the output and write it to a log file at the same time, pipe it through tee.
If you want other data to be logged, modify the -o {this} options. See man ps, section "STANDARD FORMAT SPECIFIERS", for the available parameters to use. If you want a different time resolution, change the sleep {this} in the logpid() function.
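A minimal sketch of what such a logpid() function could look like (the exact -o fields chosen here are an assumption; adjust them as described above):

```shell
# Sample one process's CPU %, memory % and elapsed time every second.
# Usage: logpid 123                      (watch in the terminal)
#        logpid 123 | tee process.log    (watch and log to a file)
logpid() { while sleep 1; do ps -p "$1" -o pcpu= -o pmem= -o etime=; done; }
```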