
Large files: synchronization does not work with more than two clients #128

Open

albrechta opened this issue Feb 25, 2015 · 5 comments

@albrechta
Contributor

Large files are not synchronized correctly when more than two clients are logged in.

The process does not fail with an error; it appears to be stuck somewhere.

@nicoruti
Contributor

Thanks for the bug report. The large-file download needs further stabilization. Will look into that ASAP.

@albrechta
Contributor Author

We did not have time to fully investigate it today, but we ran some tests and will continue tomorrow (perhaps a simple JUnit test can reproduce it on a local machine).

It works in a small scenario with only two peers connected to a router, both logged in as the same user.

Scenario where it does not work:

  • Login of 3 clients (same user), multiple peers (~8 physical machines).
  • Client 1 adds a large file (e.g. 30 MB). It is added to the profile (metadata), i.e. the add operation succeeds.
  • Clients 2 and 3 start requesting chunks from client 1, which works partially (according to the logging output). However, some chunks are never requested, so no other client ever obtains all chunks and thus the complete file.
  • Clients 2 and 3 also seem to request chunks from each other, even though neither has them. They end up sending each other DECLINED responses (AskForChunkStep / RequestChunkMessage), then retry, and so on. (Not entirely sure this is why the process never stops; it should give up after 3 retries, I think, so there may also be an uncaught exception in a runnable.)
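The last two bullets can be sketched as a bounded retry loop. This is a hypothetical illustration, not the actual H2H code: the names `ChunkDownloadSketch`, `PeerClient`, and `requestChunk` are invented, and `MAX_RETRIES = 3` is only the retry count suggested above. The point is that the request must terminate loudly when every peer keeps answering DECLINED, and that exceptions inside a background runnable must be caught so the download fails instead of hanging.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of a bounded chunk-request loop (not the H2H implementation).
public class ChunkDownloadSketch {

    static final int MAX_RETRIES = 3; // assumed retry limit, per the issue discussion

    enum Response { ACCEPTED, DECLINED }

    // Minimal stand-in for whatever sends a RequestChunkMessage to a peer.
    interface PeerClient {
        Response request(String peer, int chunkIndex) throws Exception;
    }

    // Ask each known peer for the chunk, at most MAX_RETRIES rounds.
    // Returns false (instead of looping forever) when everyone declines.
    static boolean requestChunk(int chunkIndex, Set<String> peers, PeerClient client) {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            for (String peer : peers) {
                try {
                    if (client.request(peer, chunkIndex) == Response.ACCEPTED) {
                        return true;
                    }
                } catch (Exception e) {
                    // An exception that escapes a runnable would silently kill
                    // the download thread; log it and try the next peer instead.
                    System.err.println("chunk " + chunkIndex + " from " + peer
                            + " failed: " + e.getMessage());
                }
            }
        }
        return false; // caller marks the download as failed, not stuck
    }

    public static void main(String[] args) {
        Set<String> peers = new HashSet<>();
        peers.add("client2");
        peers.add("client3");
        // Simulate the reported scenario: peers that do not hold the chunk
        // always answer DECLINED, so the loop must terminate with a failure.
        boolean ok = requestChunk(0, peers, (peer, idx) -> Response.DECLINED);
        System.out.println(ok ? "downloaded" : "gave up after retries");
    }
}
```

Running the `main` method prints "gave up after retries", which is the behavior the issue suggests is missing: today the clients apparently keep re-requesting forever rather than giving up after the retry limit.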

@nicoruti
Contributor

I just improved the download process for large files. It needs further testing, but your scenario now works on a single machine (tested with a 100 MB file). I will test with different machines in the same LAN tomorrow.
If you have time, could you re-run the scenario with your setup (8 machines)?

@Cynthion

I just saw the Git Large File Storage extension for Git. Maybe it offers some inspiration for improving the H2H concepts. (https://git-lfs.github.com/?utm_source=github_site&utm_medium=blog&utm_campaign=gitlfs)

@pmalipio

I've tried to upload and download a 1 GB file and it still doesn't work; even 30 MB files fail. Is this issue solved? Will a fix be in the next release, and do you have a roadmap for it?
