fdup.py is a simple and fast program that finds duplicate files.
It is amazingly fast: much faster than fdupes (which is written in C) and much more readable than fslint/findup.
Python is not the limiting factor here; disk speed is. A sane algorithm for sorting out potential duplicate files therefore matters far more than the implementation language. In the end it all comes down to the algorithm and disk performance: fstat, disk I/O, and hashing are nearly as fast in Python as in C, so don't worry.
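To illustrate the point, here is a minimal sketch of that kind of algorithm (not fdup.py's actual implementation): files are first grouped by size, which costs only one stat() per file, and content is hashed only for files whose size collides. Most files are eliminated without a single byte of content being read, so disk I/O, not the language, dominates. The function name and chunk size are illustrative choices, not part of fdup.py.

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(paths, chunk_size=1 << 20):
    """Return groups of paths with identical content.

    Sketch of the size-first strategy: stat() everything,
    hash only the files whose size is not unique.
    """
    # Pass 1: group by file size (cheap, no content I/O).
    by_size = defaultdict(list)
    for path in paths:
        try:
            by_size[os.stat(path).st_size].append(path)
        except OSError:
            continue  # skip unreadable or vanished files

    duplicates = []
    # Pass 2: hash only size collisions (expensive, content I/O).
    for group in by_size.values():
        if len(group) < 2:
            continue  # a unique size can never be a duplicate
        by_hash = defaultdict(list)
        for path in group:
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
        duplicates.extend(g for g in by_hash.values() if len(g) > 1)
    return duplicates
```

A further refinement (which a real tool might add) is an intermediate pass that compares only the first few kilobytes of each size collision before hashing whole files.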
$ find $PWD -type f | ./fdup.py
or, to exclude the time spent by find itself:
$ find $PWD -type f > files.txt
$ ./fdup.py < files.txt
The test directory is my $HOME, which contained 62022 files; 18680 of them are duplicates (empty files, plus duplicates from svn and git repositories).