How to replace all duplicate files with hard links

deduplication, filesystems, hardlink

I have two folders containing various files. Some of the files in the first folder have an exact copy in the second folder. I would like to replace those copies with hard links. How can I do that?

Best Answer

I know of four command-line solutions for Linux. My preferred one is the last one listed here, rdfind, because of all the options it offers.

fdupes

  • This appears to be the most frequently recommended and best-known one.
  • It's the simplest to use, but its only action is to delete duplicates (a sketch for turning its output into hard links instead follows the sample output below).
  • To ensure duplicates are actually duplicates (while not taking forever to run), comparisons between files are done first by file size, then md5 hash, then byte-by-byte comparison.

Sample output (with options "show size", "recursive"):

$ fdupes -Sr .
17 bytes each:                          
./Dir1/Some File
./Dir2/SomeFile
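
fdupes itself can only delete the extra copies for you (fdupes -rdN . keeps the first file of each set and deletes the rest without prompting). To end up with hard links instead, you have to post-process its output; here is a minimal sketch, assuming no filenames contain newlines, that keeps the first file of each group as the link target:

#!/bin/sh
# Re-link duplicates found by fdupes: fdupes prints each group of
# identical files on consecutive lines, with a blank line between groups.
original=""
fdupes -r . | while IFS= read -r file; do
    if [ -z "$file" ]; then
        original=""                    # blank line: the group has ended
    elif [ -z "$original" ]; then
        original=$file                 # first file of a group: keep as target
    else
        ln -f -- "$original" "$file"   # replace the duplicate with a hard link
    fi
done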

hardlink

  • Designed to, as the name indicates, replace found files with hardlinks.
  • Has a --dry-run option.
  • Does not indicate how contents are compared but, unlike all the other tools, it does take file mode, owner, and modified time into account.

Sample output (note how my two files have slightly different modified times, so in the second run I tell it to ignore that):

$ stat Dir*/* | grep Modify
Modify: 2015-09-06 23:51:38.784637949 -0500
Modify: 2015-09-06 23:51:47.488638188 -0500

$ hardlink --dry-run -v .
Mode:     dry-run
Files:    5
Linked:   0 files
Compared: 0 files
Saved:    0 bytes
Duration: 0.00 seconds

$ hardlink --dry-run -v -t .
[DryRun] Linking ./Dir2/SomeFile to ./Dir1/Some File (-17 bytes)
Mode:     dry-run
Files:    5
Linked:   1 files
Compared: 1 files
Saved:    17 bytes
Duration: 0.00 seconds
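
Once the dry run reports what you expect, the same command without --dry-run performs the actual linking:

$ hardlink -v -t .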

duff

  • Made to find files that the user then acts upon; has no actions available.
  • Comparisons are done by file size, then sha1 hash.
    • Hash can be changed to sha256, sha384, or sha512.
    • Hash can be disabled to do a byte-by-byte comparison (see the example after the sample output).

Sample output (with option "recursive"):

$ duff -r .
2 files in cluster 1 (17 bytes, digest 34e744e5268c613316756c679143890df3675cbb)
./Dir2/SomeFile
./Dir1/Some File
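
If I remember the flags correctly (check duff(1) to be sure), the digest is chosen with -d and byte-by-byte comparison is enabled with -t:

$ duff -r -d sha256 .    # use sha256 instead of the default sha1
$ duff -r -t .           # distrust digests, compare byte by byte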

rdfind

  • Options have an unusual syntax (meant to mimic find?).
  • Several options for actions to take on duplicate files (delete, make symlinks, make hardlinks); the corresponding commands are shown at the end of this section.
  • Has a dry-run mode.
  • Comparisons are done by file size, then first-bytes, then last-bytes, then either md5 (default) or sha1.
  • Ranking of files found makes it predictable which file is considered the original.

Sample output:

$ rdfind -dryrun true -makehardlinks true .
(DRYRUN MODE) Now scanning ".", found 5 files.
(DRYRUN MODE) Now have 5 files in total.
(DRYRUN MODE) Removed 0 files due to nonunique device and inode.
(DRYRUN MODE) Now removing files with zero size from list...removed 0 files
(DRYRUN MODE) Total size is 13341 bytes or 13 kib
(DRYRUN MODE) Now sorting on size:removed 3 files due to unique sizes from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on first bytes:removed 0 files from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on last bytes:removed 0 files from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on md5 checksum:removed 0 files from list.2 files left.
(DRYRUN MODE) It seems like you have 2 files that are not unique
(DRYRUN MODE) Totally, 17 b can be reduced.
(DRYRUN MODE) Now making results file results.txt
(DRYRUN MODE) Now making hard links.
hardlink ./Dir1/Some File to ./Dir2/SomeFile
Making 1 links.

$ cat results.txt 
# Automatically generated
# duptype id depth size device inode priority name
DUPTYPE_FIRST_OCCURRENCE 1 1 17 2055 24916405 1 ./Dir2/SomeFile
DUPTYPE_WITHIN_SAME_TREE -1 1 17 2055 24916406 1 ./Dir1/Some File
# end of file
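
Once the dry run and results.txt look right, run the same command without -dryrun true (or with -dryrun false) to actually create the links; the other actions mentioned above follow the same pattern:

$ rdfind -makehardlinks true .
$ rdfind -makesymlinks true .        # replace duplicates with symlinks instead
$ rdfind -deleteduplicates true .    # or just delete them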