Seed a torrent #50

Closed
13 tasks done
vimpunk opened this issue Nov 14, 2020 · 1 comment · Fixed by #57
Labels
enhancement New feature or request

Comments

vimpunk commented Nov 14, 2020

Disk IO

  • read specified block from disk
  • read cache, read cache lines (pull in more blocks with one read)
  • handle concurrent requests for the same block with "read faults" (see the sketch after this list)
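
A minimal sketch of how such a cache might coalesce concurrent requests with read faults (all names here are illustrative, not actual cratetorrent types): a cache line is either resident in memory or currently being read from disk, and requests that arrive while a read is in flight are queued behind it instead of triggering another read.

```rust
use std::collections::{hash_map::Entry, HashMap};
use std::net::SocketAddr;

// Illustrative types only, not the actual cratetorrent definitions.
type PieceIndex = u32;
type Block = Vec<u8>;

/// A cache line is either resident in memory or currently being read from
/// disk (a "read fault"), in which case the sessions waiting on it are queued.
enum CacheEntry {
    Cached(Vec<Block>),
    Reading(Vec<SocketAddr>),
}

#[derive(Default)]
struct ReadCache {
    entries: HashMap<PieceIndex, CacheEntry>,
}

impl ReadCache {
    /// Registers a block request for `piece` by the session at `peer`.
    /// Returns `true` if the caller needs to start a disk read; requests for
    /// a line that is already cached or already being read do not trigger one.
    fn request(&mut self, piece: PieceIndex, peer: SocketAddr) -> bool {
        match self.entries.entry(piece) {
            Entry::Occupied(mut entry) => {
                if let CacheEntry::Reading(waiters) = entry.get_mut() {
                    // Read fault already in flight: just queue the session.
                    waiters.push(peer);
                }
                false
            }
            Entry::Vacant(entry) => {
                // Cache miss: record the in-flight read so that concurrent
                // requests for the same line coalesce onto it.
                entry.insert(CacheEntry::Reading(vec![peer]));
                true
            }
        }
    }
}
```

When the disk read completes, the Reading entry would be swapped for Cached and the blocks handed to every queued session.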

Peer

  • handle request messages to upload block data
  • handle cancel message to cancel a block read:
    • probably can't cancel an in-progress disk read,
    • but shouldn't upload a block that was cancelled;
    • use a cancel buffer and, after a successful disk read, check for the block's presence in it before uploading (see the sketch after this list)
  • inbound connections:
    • peer session inbound constructor
    • handle inbound handshakes
    • remove the checks that require the remote peer to be a seed (previously only downloading from seeds was supported)
    • if we have pieces, send bitfield message
  • send have message to non-seed peers
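
A sketch of the cancel buffer mentioned in the list above (the names are hypothetical, not cratetorrent's types): the in-progress disk read is not aborted, but a completed read is checked against the set of cancelled requests before the block is uploaded.

```rust
use std::collections::HashSet;

// `BlockInfo` here mirrors the usual (piece, offset, length) triple carried
// by request/cancel messages; it is illustrative only.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct BlockInfo {
    piece_index: u32,
    offset: u32,
    len: u32,
}

#[derive(Default)]
struct PendingUploads {
    /// Blocks the peer cancelled while a disk read was (possibly) in flight.
    cancelled: HashSet<BlockInfo>,
}

impl PendingUploads {
    /// Called on a `cancel` message: the in-progress disk read itself cannot
    /// be aborted, so we only remember not to upload the block.
    fn on_cancel(&mut self, block: BlockInfo) {
        self.cancelled.insert(block);
    }

    /// Called when a disk read completes. Returns `true` if the block should
    /// still be uploaded, consuming any pending cancel for it.
    fn should_upload(&mut self, block: &BlockInfo) -> bool {
        !self.cancelled.remove(block)
    }
}
```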

Torrent

Engine & CLI

Integration tests

Use cratetorrent for both seeding and downloading, as we already have test infrastructure to easily set up and verify cratetorrent downloads. As a next step we could add leech tests against e.g. Transmission to ensure compatibility, but probably not before the alpha release.

  • set up 1 cratetorrent peer to seed a file
  • set up 1 cratetorrent peer to download same file (can reuse existing test for the most part)

vimpunk commented Nov 14, 2020

Disk reads

Currently, communication between the peer session and the disk task goes through an indirection: the peer session sends a request to disk, disk sends the result to torrent, and torrent performs some bookkeeping and forwards it to the peer.

This same mechanism is used for reading blocks and returning them to the peer session that requested them. This is for simplicity, as the communication infrastructure already exists.

Flow

  1. Peer sends a request message
  2. Session handles it and issues a BlockRead command to disk
  3. Disk reads the block and cache_line_size additional blocks, places them in the read buffer, and returns them to the peer's Torrent
  4. Torrent records the number of block bytes read and other metadata, and forwards the block to the peer

There is one problem here: we need to identify the peer session to which torrent should forward the block.

Two solutions:

  • The session includes the peer's (unique) address in the read command, and it is echoed back in the response to torrent. Based on this, torrent can look up the session in its session map and forward the message using that session's sender.
  • The peer makes a copy of its own sender and passes it along with the command. Disk then uses that channel to send the block directly to the session.

There is a slight communication overhead in the second solution, as creating and destroying the peer's mpsc::UnboundedSender incurs an atomic increment and decrement of the Arc refcount, but this may be acceptable (especially with the optimization below). Moreover, the first solution's two hops instead of one (disk -> torrent -> peer vs disk -> peer) would likely be slower anyway (benchmarking needed).

The second solution is chosen for simplicity.
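
A rough sketch of the chosen approach, assuming tokio's unbounded mpsc channels (which the mention of mpsc::UnboundedSender above suggests); the message and command names are illustrative, not the actual cratetorrent API. The session clones its own sender into the read command, and the disk task replies on it directly, skipping the torrent hop.

```rust
use tokio::sync::mpsc::UnboundedSender;

// Hypothetical message and command types for illustration.
type Block = Vec<u8>;

/// Messages delivered to a peer session task.
enum SessionMessage {
    /// A block read off disk, ready to be sent as a `piece` message.
    BlockRead { piece_index: u32, offset: u32, data: Block },
}

/// Command sent from a peer session to the disk task.
struct BlockReadCommand {
    piece_index: u32,
    offset: u32,
    len: u32,
    /// A clone of the session's own sender. Cloning and dropping it is an
    /// atomic refcount bump on the channel's shared state, which is the
    /// overhead discussed above.
    reply: UnboundedSender<SessionMessage>,
}

/// On the disk task, once the requested block (and its cache line) is read,
/// the result goes straight back to the session, skipping torrent.
fn complete_read(cmd: BlockReadCommand, data: Block) {
    // If the session has since shut down, the send fails and the block is
    // simply dropped.
    let _ = cmd.reply.send(SessionMessage::BlockRead {
        piece_index: cmd.piece_index,
        offset: cmd.offset,
        data,
    });
}
```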

Upload optimization

Read as many request messages from the peer's TCP socket as possible and batch the block reads: send the whole batch to disk in one command and await the resulting list of blocks.

Disk IO already uses a read cache, but batching would additionally optimize away some of the mpsc communication overhead.
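
A minimal sketch of the batching idea (hypothetical types): request messages decoded from the same socket read are collected and flushed to disk as a single command, so the channel round-trip is paid once per batch rather than once per block.

```rust
// Illustrative types only, meant to show the shape of the batching.
struct BlockInfo {
    piece_index: u32,
    offset: u32,
    len: u32,
}

enum PeerMessage {
    Request(BlockInfo),
    Cancel(BlockInfo),
    // ... other protocol messages
}

enum DiskCommand {
    /// Read a whole batch of blocks with a single command, amortizing the
    /// channel round-trip across all of them.
    ReadBlockBatch(Vec<BlockInfo>),
}

/// Collects the `request` messages decoded from one socket read and turns
/// them into a single disk command, if there are any.
fn batch_requests(msgs: Vec<PeerMessage>) -> Option<DiskCommand> {
    let batch: Vec<BlockInfo> = msgs
        .into_iter()
        .filter_map(|msg| match msg {
            PeerMessage::Request(block) => Some(block),
            _ => None,
        })
        .collect();
    if batch.is_empty() {
        None
    } else {
        Some(DiskCommand::ReadBlockBatch(batch))
    }
}
```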
