Cleanup TODO file.

commit 6c48585e05a89fc5871d59e89a5832de0248cf6d (1 parent: 8668448)
@jlouis authored
Showing with 0 additions and 11 deletions.
  1. TODO +0 −11
@@ -31,15 +31,7 @@ wish-list.
consider pieces we already have once and we get a faster system.
When doing this, only prune pieces which are done and checked.
- - The StatusP process is always fed static data. Feed it the correct data
- based on the current status: Are we a leecher or a seeder? And how much
- data is there left to download before we have the full file?
- (Hint: grep for canSeed and use the missingMap and pieceMap for the 'left'
- data)
- Send keepalives every two minutes as per the spec.
- - Make into a markdown document
- For the histogram code, look at
[Data.PSQueue]( Ralf
Hinze has a paper on that at [Hinze, R., A Simple Implementation
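
The removed StatusP item asks for the status process to be fed real data: whether we can seed, and how many bytes are left to download. A minimal sketch of that computation, assuming a simple `Map`-based piece map; the type and record names here are illustrative stand-ins, not combinatorrent's actual `canSeed`/`missingMap`/`pieceMap` definitions:

```haskell
import qualified Data.Map as M

type PieceNum = Int

data PieceSt = PieceDone | PiecePending | PieceInProgress
  deriving (Eq, Show)

-- Hypothetical stand-ins for the TODO's pieceMap and per-piece sizes.
type PieceMap   = M.Map PieceNum PieceSt
type PieceSizes = M.Map PieceNum Integer

-- Are we a seeder? (the TODO hints at a canSeed-style predicate)
canSeed :: PieceMap -> Bool
canSeed pm = all (== PieceDone) (M.elems pm)

-- Bytes left to download: sum the sizes of every piece not yet done,
-- which is the 'left' figure the status process should report.
bytesLeft :: PieceMap -> PieceSizes -> Integer
bytesLeft pm sizes =
    sum [ sz | (pn, st) <- M.toList pm
             , st /= PieceDone
             , Just sz <- [M.lookup pn sizes] ]
```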
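
The keep-alive item is a small, self-contained task: a BitTorrent keep-alive is a zero-length message (a 4-byte length prefix of zero), sent when nothing else has gone out for about two minutes. A hedged sketch of such a loop; `sendMessage` stands in for whatever the peer process's send queue actually is:

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import qualified Data.ByteString as B

-- A keep-alive is just a length prefix of 0 and no message id.
keepAliveBytes :: B.ByteString
keepAliveBytes = B.pack [0, 0, 0, 0]

-- Fire a keep-alive every two minutes; 'sendMessage' is a hypothetical
-- stand-in for the peer's outgoing message queue.
keepAliveLoop :: (B.ByteString -> IO ()) -> IO ()
keepAliveLoop sendMessage = forever $ do
    threadDelay (120 * 1000000)   -- two minutes, in microseconds
    sendMessage keepAliveBytes
```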
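
The histogram item points at Data.PSQueue, a priority search queue. The idea is to key the queue by piece number and use the availability count as the priority, so bumping a count and finding the rarest piece are both cheap. A minimal sketch of that use, not the project's actual histogram code:

```haskell
import Data.PSQueue (PSQ, Binding((:->)))
import qualified Data.PSQueue as PSQ

type PieceNum     = Int
type Availability = Int

-- Piece number keyed against how many peers advertise it.
type Histogram = PSQ PieceNum Availability

-- A peer announced a piece (HAVE or bitfield): bump its availability,
-- inserting it with a count of 1 if we have not seen it before.
haveReceived :: PieceNum -> Histogram -> Histogram
haveReceived pn h =
    case PSQ.lookup pn h of
      Nothing -> PSQ.insert pn 1 h
      Just _  -> PSQ.adjust (+ 1) pn h

-- Rarest-first selection: the minimum-priority binding is the rarest piece.
rarestPiece :: Histogram -> Maybe PieceNum
rarestPiece h =
    case PSQ.findMin h of
      Nothing         -> Nothing
      Just (pn :-> _) -> Just pn
```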
@@ -47,7 +39,6 @@ wish-list.
- Consider letting the supervisors support monitoring of processes. Use this to reimplement parts
of the PeerMgr code.
- Update the Seeder status in PeerMgrP.
- - When stopping a Peer, put back the Pieces to the Piece Manager.
- Do not send HAVE messages if the Peer already has the Piece Number.
- Improve on the command line parser. We will certainly need full-fledged
CL parsing at some point.
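
The supervisor item above suggests letting supervisors monitor their child processes and using that to reimplement parts of PeerMgr. One way to read it, sketched here with plain Control.Concurrent primitives rather than the project's own supervisor code: fork the child with a continuation that tells the supervisor how it terminated.

```haskell
import Control.Concurrent (ThreadId, forkFinally)
import Control.Exception (SomeException)

-- Spawn a monitored child: 'onExit' runs when the child returns or dies,
-- carrying either the exception or the normal result.
spawnMonitored :: IO () -> (Either SomeException () -> IO ()) -> IO ThreadId
spawnMonitored child onExit = forkFinally child onExit

-- Example policy a PeerMgr-like supervisor could apply: respawn a crashed
-- peer process, and simply forget about one that exited normally.
restartOnCrash :: IO () -> IO ThreadId
restartOnCrash child =
    spawnMonitored child $ \result ->
        case result of
          Left _err -> restartOnCrash child >> return ()
          Right ()  -> return ()
```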
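
For the command line item, "full-fledged CL parsing" could be as simple as System.Console.GetOpt from base. A hedged sketch with made-up flag names, just to show the shape it would take:

```haskell
import System.Console.GetOpt
import System.Environment (getArgs)

-- Illustrative flags only; the real client would define its own.
data Flag = Debug | Port String | Version
  deriving (Eq, Show)

options :: [OptDescr Flag]
options =
  [ Option ['d'] ["debug"]   (NoArg Debug)        "enable debug logging"
  , Option ['p'] ["port"]    (ReqArg Port "PORT") "listen port"
  , Option ['V'] ["version"] (NoArg Version)      "show version"
  ]

parseArgs :: [String] -> Either String ([Flag], [FilePath])
parseArgs argv =
  case getOpt Permute options argv of
    (flags, torrents, []) -> Right (flags, torrents)
    (_, _, errs)          -> Left (concat errs ++ usageInfo header options)
  where header = "Usage: client [OPTIONS] torrentfile"

main :: IO ()
main = getArgs >>= either putStrLn print . parseArgs
```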
@@ -55,8 +46,6 @@ wish-list.
- Let Piece Sets be S.Set PieceNum rather than [PieceNum]. They are
larger than 1000 for some large torrents, so it makes sense to shift to
a better representation.
- - Cleanup the code around ChokeMgrP.advancePeerChain. It currently does a
- lot of stuff it doesn't have to do.
- The status reporting code needs some help. It only transfers up/down
rates once every 30 seconds. If a peer is living for less than 30
seconds, then no upload/download will be reported for that peer. The
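
The piece-set item is a straightforward representation change: membership tests and set arithmetic on a [PieceNum] are linear, while Data.Set keeps them logarithmic even for torrents with thousands of pieces. A small sketch of what the S.Set PieceNum version looks like:

```haskell
import qualified Data.Set as S

type PieceNum = Int
type PieceSet = S.Set PieceNum

-- O(log n) membership instead of a linear 'elem' over a list.
hasPiece :: PieceNum -> PieceSet -> Bool
hasPiece = S.member

-- Pieces the remote peer advertises that we are still missing; with lists
-- this is a quadratic filter, with sets it is a cheap difference.
interestingPieces :: PieceSet -> PieceSet -> PieceSet
interestingPieces theirPieces ourPieces = theirPieces `S.difference` ourPieces
```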
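
The rate-reporting item describes a real gap: if rates are only forwarded on a 30-second tick, a peer that lives for less than 30 seconds never reports anything. One hedged way to close it is a per-peer byte counter that is flushed both by the periodic tick and by the peer's shutdown path; `reportRate` below is a hypothetical stand-in for the status process interface:

```haskell
import Data.IORef

data RateCounter = RateCounter
  { upBytes   :: IORef Integer
  , downBytes :: IORef Integer
  }

newRateCounter :: IO RateCounter
newRateCounter = RateCounter <$> newIORef 0 <*> newIORef 0

-- Called from the peer's send/receive paths as bytes move.
countUp, countDown :: RateCounter -> Integer -> IO ()
countUp   rc n = modifyIORef' (upBytes rc)   (+ n)
countDown rc n = modifyIORef' (downBytes rc) (+ n)

-- Called both from the 30-second timer and from the peer's shutdown path,
-- so short-lived peers still get their traffic counted.
flushCounter :: (Integer -> Integer -> IO ()) -> RateCounter -> IO ()
flushCounter reportRate rc = do
    up   <- atomicModifyIORef' (upBytes rc)   (\n -> (0, n))
    down <- atomicModifyIORef' (downBytes rc) (\n -> (0, n))
    reportRate up down
```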

0 comments on commit 6c48585
