Can't continuously download from slow peers #64
This is a good observation. However, tracking individual block requests and responses would be overkill. How about increasing the default timeout to 2 minutes?
Actually, it did track this, but didn't use it. Just replace
with long duration = System.currentTimeMillis() - max(started, checked);
and
with shouldAssign = !timeoutedPeers.containsKey(peer);
and, for more accurate timeout tracking, move bt/bt-core/src/main/java/bt/torrent/messaging/PieceConsumer.java Lines 117 to 122 in 77315df
to line 64: bt/bt-core/src/main/java/bt/torrent/messaging/PieceConsumer.java Lines 60 to 66 in 77315df
and finally replace
with this.maxPieceReceivingTime = Duration.ofSeconds(10);
(and rename
PS: all my changes related to this issue can be seen in my fork:
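Taken together, the proposed changes above could be sketched roughly as follows. The names (started, checked, timeoutedPeers, maxPieceReceivingTime) mirror the snippets referenced in this comment; this is an illustrative sketch under those assumptions, not the actual Bt source:

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class AssignmentTimeoutSketch {
    // Proposed shorter timeout (was 30 seconds for a whole piece)
    private final Duration maxPieceReceivingTime = Duration.ofSeconds(10);
    // Peers that recently timed out; keyed by peer, value is the timeout timestamp
    private final Map<Object, Long> timeoutedPeers = new ConcurrentHashMap<>();

    boolean isTimedOut(long started, long checked) {
        // Measure inactivity since the most recent event (assignment start
        // OR last successful check), not only since the start
        long duration = System.currentTimeMillis() - Math.max(started, checked);
        return duration > maxPieceReceivingTime.toMillis();
    }

    boolean shouldAssign(Object peer) {
        // Don't assign new pieces to peers that recently timed out
        return !timeoutedPeers.containsKey(peer);
    }
}
```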
BTW, is it OK that during
On the surface, this looks good. But there might be a problem with time-tracking individual blocks: the sender will not always send the requested blocks at equal intervals. Since we request several blocks at a time (by sending multiple requests from the request queue), the sender may optimize as well and send all requested blocks at once, and this may happen well after the individual block receiving timeout has elapsed. I.e., compare the following timelines of receiving blocks from two peers:
The average download rate is the same for both peers (peer Y may even be slightly faster than peer X by sending the blocks after 11 seconds), but with your changes the second peer (Y) will be considered to have timed out.
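The two timelines can be illustrated with a small sketch (hypothetical helper, not part of Bt): peer X delivers one block per second, peer Y delivers all ten blocks in a batch at t = 11 s. Their overall rates are nearly identical, but a per-block inactivity timeout of 10 seconds flags only peer Y:

```java
import java.util.Collections;
import java.util.List;

class BlockTimelines {
    // Returns true if any gap between consecutive block arrivals
    // (or between the request at t = 0 and the first arrival)
    // exceeds the per-block timeout.
    static boolean timesOut(List<Integer> arrivalSeconds, int timeoutSeconds) {
        int previous = 0; // all requests were sent at t = 0
        for (int t : arrivalSeconds) {
            if (t - previous > timeoutSeconds) {
                return true;
            }
            previous = t;
        }
        return false;
    }
}
```

With a 10-second timeout, timesOut(List.of(1,2,3,4,5,6,7,8,9,10), 10) is false for peer X, while timesOut(Collections.nCopies(10, 11), 10) is true for peer Y, even though Y delivered the same ten blocks only one second later overall.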
Yeah, I think it's OK. BEP-3 requires sending Have messages to all connected peers, because the most important thing is to keep everybody's view of the state of the swarm up-to-date.
I think we can just replace the existing option with a new relative metric:
Metric: and use it to calculate
This is basically the same thing as "max amount of time to receive one unit of data", but worded differently and, when expressed in code directly (for instance, as some
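The relative-metric idea discussed above ("max amount of time to receive one unit of data") could be expressed roughly like this. The option name and per-MiB unit are my assumptions for illustration, not an actual Bt config parameter:

```java
import java.time.Duration;

class RelativeTimeout {
    // Hypothetical relative option: maximum time allowed per MiB of data.
    // The absolute piece-level timeout is then derived from the piece size,
    // so large and small pieces are judged by the same effective rate.
    static Duration timeoutFor(long pieceSizeBytes, Duration maxTimePerMiB) {
        double mib = pieceSizeBytes / (1024.0 * 1024.0);
        return Duration.ofMillis((long) (mib * maxTimePerMiB.toMillis()));
    }
}
```

For example, allowing 30 seconds per MiB would give a 4 MiB piece a 120-second timeout, instead of one fixed timeout regardless of piece size.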
This is just an approximation, so I don't think it's incredibly useful to constantly perform this recalculation...
It is more optimal to track a smaller amount of data, because it makes it possible to quickly detect inactivity.
Optimal from what perspective? Is quickly detecting inactivity of one of the many active peers really that important? I'd rather say the opposite is true: quickly punishing peers for occasional pauses is detrimental in the longer run.
Instead of short-term micromanagement, we would eventually like to maintain long-term statistics for all encountered peers (we might even persist those between sessions) and prefer to assign pieces to peers that have proved to be more responsive and timely (while still keeping some reasonable, but not too punishing, cap on the receiving time).
For real-time streaming applications it is very important: one inactive peer can stall the stream.
Just a note: if the whole channel bandwidth is utilized by all downloads, then it makes sense to download blocks from a peer one by one. This is also the preferred behaviour for real-time streaming (since a peer gets a task scoped to a block, not a whole piece). Anyway, I plan to implement simultaneous loading of a piece by several peers in normal mode (like in endgame, but without random() and duplicates).
The timeout is applied to a whole piece, while it would be more appropriate to apply it to a small block.
bt/bt-core/src/main/java/bt/runtime/Config.java
Line 83 in a37ca29
The default timeout is 30 sec. Say the piece size is 4 MiB. In that case, a peer whose download speed is less than 4 MiB / 30 sec ≈ 136 KiB/s will be disconnected by timeout.
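The arithmetic behind the example above, as a quick check (the 30-second default and 4 MiB piece size are taken from the issue text):

```java
// A 30-second timeout on a whole 4 MiB piece implies a minimum
// sustainable download rate; slower peers get disconnected.
long pieceKiB = 4 * 1024;          // 4 MiB expressed in KiB
long timeoutSeconds = 30;          // default piece-receiving timeout
long minRateKiBPerSec = pieceKiB / timeoutSeconds; // 4096 / 30 = 136 (integer division)
```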