Duplicate files may be skipped if they are part of the same JSON #1

Open
amiramix opened this issue Feb 10, 2016 · 0 comments

@amiramix
Owner

The application remembers downloaded files, so any new file with the same md5/sha1 as an already downloaded file should be skipped. However, when new files sharing an md5/sha1 that isn't yet in the database are added to the download queue, the duplicates are only detected if they are all added to the queue at the same time. A race condition is possible where duplicate files are not detected (a sketch of the racy check follows the list):

  1. Two or more files are in the same JSON but their md5/sha1 is not yet in the database
  2. The first file is added to the download queue and downloaded
  3. The second file is added to the download queue after the first one has completed, so the duplicate file is downloaded as well
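
To make the race concrete, here is a minimal sketch of the check-then-act pattern described above, assuming a Mnesia table `downloaded` keyed on the md5/sha1. The module, table, and function names are hypothetical, for illustration only:

```erlang
%% Hypothetical sketch of the racy check-then-act pattern; these names
%% are illustrative, not the project's actual code.
-module(dl_race).
-export([maybe_queue/1]).

-record(downloaded, {checksum, path}).

maybe_queue(Checksum) ->
    case mnesia:dirty_read(downloaded, Checksum) of
        [] ->
            %% Checksum not in the database yet, so the file is queued.
            %% A second file with the same checksum that reaches this
            %% check before the first one's record has been written to
            %% the database is queued (and downloaded) as well.
            queue;
        [#downloaded{}] ->
            skip
    end.
```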

This should be rare, if possible at all, since files from the same JSON are added to the download queue at roughly the same time, as soon as their torrents have been downloaded. However, the downloading of torrents is asynchronous, so the race condition is possible. The longer the download queue, the smaller the chance that a duplicate file is downloaded before the other duplicate is added to the queue.

A bigger issue is with only indexing the JSON rather than downloading it, because then nothing is added to the download queue at all. Even in this case, however, the problem is only visible in the logs, because only one file with the duplicated md5/sha1 ends up stored in the database (each duplicate overwrites the previous record).

This could be fixed by checking any new download not only against the already downloaded files in the database but also against the files currently being downloaded. For example, every new download could be recorded in ETS first and then moved to Mnesia once the download has finished.
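
A minimal sketch of that approach, assuming a public ETS set named `in_flight` owned by the downloader and the same hypothetical Mnesia table `downloaded` as above; the module and function names are illustrative, not the project's actual API. `ets:insert_new/2` makes claiming a checksum atomic, so of two concurrent duplicates exactly one is queued:

```erlang
%% Hypothetical sketch of the proposed fix: claim the checksum in a
%% public ETS set before downloading, and move it to Mnesia only once
%% the download has finished.
-module(dl_dedup).
-export([init/0, maybe_queue/1, finish_download/2]).

-record(downloaded, {checksum, path}).

init() ->
    %% Set of checksums whose downloads are currently in progress.
    ets:new(in_flight, [named_table, public, set]).

maybe_queue(Checksum) ->
    case mnesia:dirty_read(downloaded, Checksum) of
        [#downloaded{}] ->
            skip;  % already downloaded earlier
        [] ->
            %% ets:insert_new/2 is atomic and returns false when the
            %% key already exists, so of two concurrent duplicates
            %% exactly one wins the right to download.
            case ets:insert_new(in_flight, {Checksum}) of
                true  -> queue;
                false -> skip
            end
    end.

finish_download(Checksum, Path) ->
    %% Persist the completed download, then release the in-flight claim.
    mnesia:dirty_write(#downloaded{checksum = Checksum, path = Path}),
    ets:delete(in_flight, Checksum).
```

One detail to decide in a real implementation: if a download fails or crashes, the in-flight claim should also be released (or the entry retried), otherwise the checksum would stay blocked forever.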
