Yep. That's not a disk issue. My suggestions / theories:
FileVault could be a reason for the slowdown, and a pretty big one if your disk is very cluttered. Even with FileVault off, excessive disk clutter can slow down your computer, especially at startup (though not while you're typing). I consider "disk clutter" to be roughly 80-90% of your disk space used.
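A quick way to check how close you are to that threshold from the command line (a sketch using the standard BSD/POSIX `df`):

```shell
# Print the used-space percentage for the startup volume.
# -P forces POSIX one-line-per-volume output, so field 5 is the capacity.
df -P / | awk 'NR==2 {print "Disk used:", $5}'
```

If that prints something in the 80-90% range or above, freeing up space is worth trying.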
RAM could be a factor too, especially if you're running memory-intensive apps. Your manual should say how much RAM your MacBook can take, i.e. how far you can expand it (perhaps 8 GB). When your RAM gets full (you can see how full it is in Activity Monitor), the system starts transferring memory pages from RAM to your HDD, which is considerably slower than the lightning-fast RAM. This could be an issue, because when your apps need those pages back, they have to be loaded from the slower source.
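You can watch for this paging on the command line too (macOS-only; the exact label is from my recollection of `vm_stat` output, so check yours):

```shell
# macOS: show the cumulative pageout counter. If this number keeps
# climbing while you work, RAM is full and pages are being pushed
# out to the (much slower) disk.
vm_stat | grep 'Pageouts'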
And last, but probably the biggest: your apps. Check whether any apps are eating your CPU in the background using Activity Monitor (Applications > Utilities > Activity Monitor). Also check System Preferences for apps that load when you log in (Accounts > Login Items tab), and try to get rid of any background apps you don't need. Flash is a CPU hog; it has slowed my whole web browser down to the point where even typing was slow.
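If you prefer a Terminal view of the same information Activity Monitor shows (a portable sketch; column 3 of the standard `ps aux` layout is %CPU):

```shell
# List the five processes currently using the most CPU,
# sorted numerically and descending on the %CPU column.
ps aux | sort -rnk 3 | head -5
```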
If your file system is healthy at the level of its structure and you want to find files sitting on faulty disk blocks, here is how I would proceed:
Make a full backup of your disk with Time Machine or Carbon Copy Cloner.
Check this backup.
Run the following heavy and risky command (risky if you do have bad blocks outside of your file system structure); make sure the {} is quoted so that file names containing spaces work:
find / -type f -print -exec dd if="{}" of=/dev/null bs=1m \;
This heavy find command will print each plain file's name (reading only its directory entry, not the file itself) and then make a full, fast read of all of its data blocks.
Upon hitting the first file containing bad blocks, find will make the kernel log a read error to /var/log/system.log, and it will either slow down or bring your system to a total halt.
Which of the two happens mostly depends on the drive's ability to relocate the bad blocks it finds into the internal pool of spare sectors dedicated to this routine fix.
The file containing bad blocks will be the last name printed by find.
Write down this file name on a piece of paper!
Let's say that this file name is:
/.DocumentRevisions-V100/.cs/ChunkStorage/0/0/0/9
At this point you may still be able to kill find quickly by hitting Ctrl+C. If killing it nicely fails, just crash your Mac.
Upon rebooting your Mac, directly check the file containing bad blocks:
dd if='/.DocumentRevisions-V100/.cs/ChunkStorage/0/0/0/9' of=/dev/null bs=1m
If the command terminates correctly, then the error was light enough for your disk to read the file and reallocate the bad blocks.
If the command doesn't terminate, you won't be able to kill it normally; your data is totally lost, and you will have to crash your Mac once more.
In this last case, you have to consider replacing your disk and working from your last backups. Some other files might also contain bad blocks that have stayed undetected for a long time, simply because you never read them.
The kernel won't fire a read error on a block you never read.
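If you would rather have the failing paths collected in a file than copied off the screen, the same scan can be wrapped in a small script (a sketch; the function name and log location are my own choices, and `bs=64k` replaces `bs=1m` so it works with both BSD and GNU dd):

```shell
#!/bin/bash
# Read every regular file under a directory; any file that dd cannot
# read fully (i.e. a read error on a bad block) gets its path appended
# to a log file instead of relying on find's console output.
scan_for_bad_blocks() {
    local root=$1 log=$2
    find "$root" -type f -print0 |
    while IFS= read -r -d '' f; do
        # dd exits non-zero as soon as it hits an unreadable block.
        dd if="$f" of=/dev/null bs=64k 2>/dev/null ||
            printf '%s\n' "$f" >>"$log"
    done
}

# Whole-disk scan, equivalent to the find/dd one-liner above:
# scan_for_bad_blocks / /tmp/badfiles.txt
```

Afterwards the log lists every unreadable file, so there is nothing to copy down by hand before a crash.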
Best Answer
After fiddling with making my own solution via fs_usage manipulation, I discovered the DTrace iotop script (see man iotop). It really seems to do the trick when it comes to sampling file system activity at nicely updated intervals, and it quickly lets you isolate possibly misbehaving processes.
NOTE: you must run it with sudo or as the root user.
For example:
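A typical invocation (the interval/count arguments are from my recollection of the DTraceToolkit man page; check `man iotop` on your system, since option sets differ between versions):

```shell
# Print 12 samples of per-process disk I/O, 5 seconds apart.
# Requires root because it runs DTrace under the hood.
sudo iotop 5 12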
Options: