I thoroughly reread the ddrescue manual and found the following option:
-m file
--domain-logfile=file
Restrict the rescue domain to the blocks marked as finished in the logfile file. This is useful if the destination drive fails during the rescue.
So the ddrescue invocation would look something like this:
# ddrescue -d -b 4096 -m sda1.log /dev/sda1 /mnt/sda1.img logfile2.log
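Put together, the two-pass workflow might look like the sketch below. Everything here is hypothetical (device names, mount points, and log file names are modeled on the invocation above), and the commands are printed rather than executed so the sketch is safe to run:

```shell
#!/bin/sh
# Sketch of a two-pass rescue after the destination drive fails.
# All device and file names are hypothetical.
run() { echo "+ $*"; }   # swap 'echo "+ $*"' for '"$@"' to actually execute

# Pass 1: the original rescue, interrupted when the destination failed.
run ddrescue -d -b 4096 /dev/sda1 /mnt/old_dest/sda1.img sda1.log

# Pass 2: restrict the domain (-m) to the blocks pass 1 marked finished
# in sda1.log, copy them to the new destination, and keep a fresh log.
run ddrescue -d -b 4096 -m sda1.log /dev/sda1 /mnt/sda1.img logfile2.log
```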
Even though the question was asked 10 months ago, this answer may still be relevant: depending on a few factors, the recovery cycle might still be running! No pun intended.
The reason is that a reliable time estimate is almost impossible; however, you can sometimes get a rough idea, as follows.
One of the most obvious reasons is that you can't predict how long the drive will take to read a bad sector, and if you want ddrescue to read and retry every single one, it could take a very long time.
For example, I'm currently running a recovery on a small 500 GB drive that has been going for over two weeks, and I probably have a few days left.
But mine is a more complicated situation because the drive is encrypted, and to read anything successfully I have to make sure I get every sector holding partition tables, boot sectors, and other critical parts of the disk. I'm using additional techniques alongside ddrescue to improve my chances with the bad sectors. In other words, your unique situation largely determines the time to completion.
If by an estimate of "loops" you mean the number of retries, that's something you determine with the parameters you use. If you mean the total number of passes, that's easily determined by reading about the algorithm in the manual:
man ddrescue, section "Algorithm: How ddrescue recovers the data"
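As an illustration of those retry parameters, the per-sector retry count is set with `-r` (`--retries`, or `--retry-passes` in newer ddrescue versions). A sketch with hypothetical paths, printing the command instead of running it:

```shell
run() { echo "+ $*"; }   # print instead of execute
# Retry each bad sector up to 3 times (-r 3) before giving up on it.
run ddrescue -d -r 3 /dev/sda1 /mnt/sda1.img sda1.log
```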
I'll speak specifically to the numbers in the screen captures you provided. Other situations may have other factors in play, so take this as a general guideline.
In the sample you've provided, take a look at ddrescue's running status screen. The total "estimate" of the problem (the rescue domain) is given by "errsize": the amount of data yet to be read. In the sample it is 345 GB.
On the next line down, to the right, is the "average rate"; in the sample it is 583 kB/s.
If the "average rate" were to hold roughly steady, you would have about 7 more days to go: 345 GB / (583 kB/s × 60 × 60 × 24 s/day) ≈ 7.18 days.
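That back-of-the-envelope arithmetic can be checked in the shell. Treating the numbers as binary units (345 GiB, 583 KiB/s) reproduces the 7.18 figure; read as decimal GB and kB/s it comes out closer to 6.85 days:

```shell
errsize=$((345 * 1024 * 1024 * 1024))   # bytes still unread ("errsize")
rate=$((583 * 1024))                    # "average rate" in bytes per second
# days remaining = bytes / (bytes-per-second * seconds-per-day)
awk -v e="$errsize" -v r="$rate" \
    'BEGIN { printf "%.2f days\n", e / (r * 60 * 60 * 24) }'
# prints: 7.18 days
```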
However, the problem is that you can't rely on that 583 kB/s. In fact, the deeper you go into the recovery, the slower the drive gets, since it is reading progressively tougher areas and doing more retries.
So the time to finish grows sharply, all depending on how badly the drive is damaged.
The sample you've provided shows that the last "successful read" was over 10 hours ago. In other words, ddrescue hasn't really gotten anything off the drive for 10+ hours, which suggests that 345 GB of your data (or some portion of it) may be shot. This is very bad news for you.
In contrast, my second 500 GB drive, which had only just started giving me S.M.A.R.T. errors, was copied disk-to-disk (with the log file on another drive) and the whole operation took about 8-9 hours. The last part slowed down, but that's still bearable. The very bad drive, as noted above, is well past two weeks working on 500 GB and still has about 4-5% left to recover.
HTH and YMMV
Best Answer
The manual gives you an example which is almost exactly what you want:
[emphasis mine]
The only difference is that you want to write random data, so use /dev/urandom instead of /dev/zero.
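Since the quoted manual example isn't reproduced above, the sketch below is only my guess at the substitution, assuming the manual's overwrite-a-drive example; /dev/sdX is a placeholder, and the command is printed rather than executed because running it would destroy the target's contents:

```shell
run() { echo "+ $*"; }   # print instead of execute (this command is destructive)
# Overwrite the target with random data instead of zeros.
run ddrescue --force /dev/urandom /dev/sdX
```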