Summary
Sadly no.
Technical Details
What happened is that those files were fragmented. Once they were deleted, the cluster chain was removed, so when the programs "recovered" them, all they could do was look at the starting location (which is still present) and the size of the file (which is also still present) and simply copy that many clusters in a row from the start.
This works fine if a file is stored in a single, contiguous block (i.e., defragmented), but if it was fragmented, its blocks are spread out around the disk and the program has absolutely no way to know where they are or which ones to use. That is why most of the corrupted recovered files will have at least one cluster's worth of correct data, but then contain whatever happened to be in the subsequent clusters, which used to belong to other files.
If the files are plain-text, then you could search the drive for unused clusters (which is a nightmare with a giant, nearly empty disk) and manually stitch the file back together (I did this a few times many years ago). But with binary files, this is effectively impossible. In fact, even with plain-text files, it is difficult at best if the file had been edited and saved numerous times, because it then becomes hard to identify which clusters contain blocks of the last version of the file.
PhotoRec (and its ilk)
As you noticed, PhotoRec seems to recover more (at the cost of lost filenames). I'll explain.
The above explanation is how some data-recovery programs work. That approach is generally more reliable because it works from the directory entries of files that genuinely existed recently. However (perhaps not surprisingly), it can miss some files. That is why other programs like PhotoRec use a different approach: instead of looking up a deleted file's information (filename, size, timestamp, starting cluster) in its directory entry and then copying the clusters from the disk, they search the whole disk for lost files.
Most file types have a signature (usually at the start of the file, in the header): a sequence of bytes that identifies the file as a certain type. Because of this, programs that open a file can determine whether it is the correct type, and other programs can verify the type of a file.
What some data-recovery programs do is search the disk, checking each cluster to see whether it contains the signature of one of various file types. If a cluster contains a signature, the program copies that cluster (and more, depending on various factors) to a file.
This means that it can find some files that are not linked in any directories. That's good, but there are some downsides:
- Because it searches the disk directly instead of reading directory entries, it has no information about the file, so it applies a generic filename and gives the file the current date/time as its timestamp instead of the original one
- Because it has no information about the file, it does not know how big the file is supposed to be. Only some (few?) filetypes indicate the exact size in the header, so most recovered files will, at best, be rounded up to the nearest cluster, while others can end up being ridiculously huge (e.g., a 10x10 GIF file that is 1.7 GB!)
- Like the other data-recovery method, it has no way of recovering fragmented files and simply copies contiguous (unused) clusters regardless of whether they belong to the file or not (check the files that PhotoRec recovered; plenty will be half-corrupt like the ones that Recuva recovered)
- Because it is manually scanning the disk, it will "recover" a whole lot more files than programs that use the other method; many of these are legitimately deleted files that may have been erased a long time ago, and they come from all over the disk, not just a specific directory. This means a lot more clutter and many more files that have to be examined and sorted through
Sympathy/Commiseration
I was in a similar situation to yours last year. I accidentally deleted ~9,000 graphic files from a volume that was nearly full (hence lots of fragmentation). I used a host of recovery programs that gave (sometimes vastly) different results. While I got a lot of files back, not surprisingly, many of them were corrupt, and more than a year later, I'm still trying to sort through them and find which ones are bad.
Unfortunately, current file-systems still don't do much to enhance data-recovery, so losing files means a lot of manual work.
Advice
It doesn't help after losing files, but for future reference, the best way to increase the chances of a successful recovery is to keep the disk defragmented (have the system automatically defragment when it idles).
Best Answer
Correct. That is because chkdsk simply looks at the FAT and creates directory entries for orphaned FAT chains, so it has absolutely no knowledge of file or folder information (they are orphaned, after all). ChkBack then examines the contents of the files and simply gives the proper extension to any file-types that it recognizes.

I was once designing my own file-identification program to be the mother-of-all such programs, but then I found TrID. It has support for numerous file-types and Marco keeps adding new ones (the current database was updated just five days ago).

Nope; the FAT is simply a chain, and it cannot be modified without moving files around, which chkdsk does not do; it only creates new directory entries corresponding to the FAT chains.

Did you try the other programs before running chkdsk? While running chkdsk did make changes, it did not overwrite any directory entries (unless they were in the root directory), so if the files were recoverable before, they should still be recoverable; sort of. The problem is that once chkdsk is run and creates directory entries for the orphaned FAT chains, they are no longer orphaned, so other programs will not think there is anything wrong with them (they appear as regular files and directories). You can still use a hex-/disk-editor to manually edit the drive and recover things (assuming they were not corrupted), but that is somewhat of an advanced task.
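To make "orphaned FAT chain" concrete: the FAT is a table where each entry either points to the next cluster of a file or holds an end-of-chain marker, and an orphaned chain is simply one that no directory entry points to. Here is a minimal sketch of following a FAT32 chain; modeling the FAT as a plain Python list of 32-bit entries is an assumption for illustration (a real tool reads the table from the volume):

```python
FAT32_EOC = 0x0FFFFFF8  # FAT32 entries >= this value mark end-of-chain

def follow_chain(fat, start_cluster):
    """Return the list of clusters in one FAT32 chain.
    fat is a sequence of 32-bit FAT entries (index = cluster number)."""
    chain = []
    cluster = start_cluster
    while 2 <= cluster < FAT32_EOC:  # valid data clusters start at 2
        chain.append(cluster)
        cluster = fat[cluster] & 0x0FFFFFFF  # top 4 bits are reserved in FAT32
    return chain
```

What chkdsk does, in effect, is find chains like this that no directory entry references and manufacture new entries (FILE0000.CHK and so on) pointing at them, which is why they then look like ordinary files to every other program.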
If you were a professional then, truthfully, yes, you would have been irresponsible. The first rule of data-recovery is to avoid writing to the disk at all costs, because every write increases the likelihood of overwriting something and rendering it permanently lost. Professionals make a byte-for-byte copy of the disk in question and work on that, because if anything goes wrong, they can just make another copy. (In fact, they work on a copy of the copy, because it is not always possible to copy the disk, as with physically damaged disks; so if you manage to get one copy, you should not edit it directly.)
I tried a whole battery of programs in 2011 and the ones that I liked best were PhotoRec because it has the option of scanning for lost files in just the free space and Undelete360 because it was one of the most effective at detecting and recovering file-/folder-names. Both are free.
No data-recovery program will give perfect, effort-free results. They look at the directory entries that are marked as deleted and list everything they find, which means the list will include a lot of material that was (legitimately) deleted a long time ago and is no longer valid. You will need to manually check each and every file to make sure not only that it is not corrupt, but that it is even a real file (it could contain chunks of other files, making it complete gibberish).
You can use a disk-cloning program to create a disk image. Drive Snapshot is a great commercial program and DriveImage XML is a great free program. You can even use a hex-/disk-editor to manually copy all sectors to a file. There are two things to note when creating a disk image:
You must select the program's copy-all function; otherwise it will examine the file-system and copy only the clusters that are in use, which is usually no good for data-recovery. Check the option that copies all clusters, including unused ones (i.e., a true and full clone).
Most such programs have an option to compress the image to save space. This is a good and useful function, but it also means that you will not be able to view the image in a hex-editor; you must restore it to a disk, or at least mount it as a drive, in order to access the volume.