This repository has been archived by the owner on Feb 1, 2023. It is now read-only.

Ordering response blocks #197

Closed

dirkmc opened this issue Sep 19, 2019 · 2 comments


dirkmc commented Sep 19, 2019

Background

When a requesting node sends wants to a peer, it assigns a priority to each want.
Currently the priorities are assigned such that each time the requester sends a group of wants to a peer, the first want in the group is assigned the maximum allowed priority, and the priority is decremented by one for each subsequent want. For example, if the maximum allowed priority were 100:

  • CID1: 100
  • CID2: 99
  • CID3: 98
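A minimal sketch of this decrementing assignment, assuming a maximum priority of 100 as in the example (the names and types here are illustrative, not the actual implementation's identifiers):

```go
package sketch

// maxPriority is an assumed cap, matching the example above.
const maxPriority = 100

type wantEntry struct {
	cid      string // simplified stand-in for a CID
	priority int
}

// assignPriorities gives the first want the maximum priority and
// decrements by one for each subsequent want.
func assignPriorities(cids []string) []wantEntry {
	entries := make([]wantEntry, 0, len(cids))
	for i, c := range cids {
		entries = append(entries, wantEntry{cid: c, priority: maxPriority - i})
	}
	return entries
}
```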

The responding node gets the size of each wanted block from the blockstore, then organizes the wants into groups such that the total size of each group of wanted blocks is no greater than 256k. For example:

  • Group1: { CID3: 100k, CID1: 110k }
  • Group2: { CID2: 105k }

Note that the composition of each group depends on the order in which the wants were added to the request, not on the priority of the wants.
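For illustration, a greedy size-capped packing in insertion order reproduces the example groups if the wants arrived in the order CID3, CID1, CID2 (100k + 110k fits under 256k, adding 105k would not). The types and names below are assumptions for this sketch:

```go
package sketch

// maxGroupSize is the 256k cap described above.
const maxGroupSize = 256 * 1024

type want struct {
	cid      string
	size     int // block size, looked up in the blockstore
	priority int
}

// groupWants packs wants into groups in insertion order, starting a new
// group whenever adding the next want would exceed the size cap.
func groupWants(wants []want) [][]want {
	var groups [][]want
	var cur []want
	curSize := 0
	for _, w := range wants {
		if len(cur) > 0 && curSize+w.size > maxGroupSize {
			groups = append(groups, cur)
			cur, curSize = nil, 0
		}
		cur = append(cur, w)
		curSize += w.size
	}
	if len(cur) > 0 {
		groups = append(groups, cur)
	}
	return groups
}
```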

The responding node adds the want groups to a queue on a per-peer basis. Each peer's queue is ordered by priority, then FIFO. The priority of a group is the highest priority of any want in the group.
For example, Group1's want priorities are { CID1: 100, CID3: 98 }, so Group1's priority is 100.
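A sketch of the group-priority rule and queue ordering (assumed shapes, not the actual queue implementation):

```go
package sketch

type queuedWant struct {
	cid      string
	priority int
}

type wantGroup struct {
	wants []queuedWant
	seq   int // insertion sequence number, for FIFO tie-breaking
}

// priority of a group is the highest priority of any want it contains,
// e.g. { CID1: 100, CID3: 98 } gives the group priority 100.
func (g wantGroup) priority() int {
	max := 0
	for _, w := range g.wants {
		if w.priority > max {
			max = w.priority
		}
	}
	return max
}

// popBefore reports whether group a is popped before group b:
// higher priority first, then FIFO within equal priority.
func popBefore(a, b wantGroup) bool {
	if a.priority() != b.priority() {
		return a.priority() > b.priority()
	}
	return a.seq < b.seq
}
```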

Peers are ordered such that

  • Peers with no wants come last
  • Peers with the fewest "active" wants come first
    An "active" want is one that is currently being processed (the block is being retrieved from the blockstore and sent to the requester).

For example:

  • Peer A: 2 active / 3 pending
  • Peer B: 0 active / 1 pending
  • Peer C: 0 active / 0 pending
  • Peer D: 3 active / 0 pending

In this example the order would be B, A, D, C.
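These two rules can be written as a comparator; applied to the example above it yields B, A, D, C. Field names are assumptions for this sketch:

```go
package sketch

// peerState holds the two counters the ordering depends on.
type peerState struct {
	active  int // wants currently being processed
	pending int // wants queued but not yet being processed
}

func (p peerState) hasWants() bool { return p.active+p.pending > 0 }

// peerBefore reports whether peer a is served before peer b: peers with
// no wants come last; otherwise the peer with fewer active wants wins.
func peerBefore(a, b peerState) bool {
	if a.hasWants() != b.hasWants() {
		return a.hasWants()
	}
	return a.active < b.active
}
```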

A worker task pops want groups off the queue, fetches the corresponding blocks from the blockstore, adds the blocks to a message and passes the message off to a worker pool which sends the message to the requesting peer.
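Roughly, the worker loop looks like this (the channel and blockstore types are heavy simplifications of the real interfaces):

```go
package sketch

// responseWorker pops a want group off the queue, fetches each block,
// assembles a message, and hands it to the send worker pool.
func responseWorker(queue <-chan []string, blockstore map[string][]byte, sendPool chan<- [][]byte) {
	for wants := range queue {
		var msg [][]byte
		for _, c := range wants {
			// Fetch the wanted block from the blockstore, skipping
			// blocks we don't have.
			if blk, ok := blockstore[c]; ok {
				msg = append(msg, blk)
			}
		}
		sendPool <- msg
	}
}
```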

Proposals

  1. The responding node knows the size of each block at the time when it is added to the queue, yet peers are ordered only by the number of active wants.
    Proposal: Order peers by the cumulative block size of the active wants (see the sketch after this list).
  2. The responding node collects wants into groups, up to a maximum size. A worker task pops a whole group at a time off the queue. In the time between when the group is added and when the group is popped, the node may receive a block corresponding to one of the wants in the group.
    Proposal: Pop wants one-by-one from the queue up to the maximum message size.
  3. The responding node collects each peer's wants into groups arbitrarily, then orders the groups by the highest priority want in the group.
    Proposal: Order a peer's wants by priority regardless of group.
  4. The requesting node assigns priorities to wants starting at the maximum priority. Subsequent requests will have overlapping priorities.
    Proposal: Assign priority by FIFO.
  5. The worker task that pops wants from the queue and fetches the corresponding blocks from the blockstore is single-threaded.
    Proposal: Two limited-size worker pools:
    • Poppers: each worker pops enough wants off the queue to fill a message
    • Fetchers: each worker fetches a block from the block store
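To make proposal 1 concrete, a sketch of a size-based peer comparator (field names are assumptions, not the actual implementation):

```go
package sketch

// peerLoad tracks the cumulative block size of a peer's active wants
// rather than their count.
type peerLoad struct {
	activeBytes int  // total size of blocks being processed for this peer
	hasWants    bool // whether any wants (active or pending) remain
}

// serveBefore orders peers by how much data is actively being sent to
// them, so a peer receiving one 1 MB block no longer outranks a peer
// receiving three 1 KB blocks.
func serveBefore(a, b peerLoad) bool {
	if a.hasWants != b.hasWants {
		return a.hasWants // peers with no wants still come last
	}
	return a.activeBytes < b.activeBytes
}
```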

Stebalien commented Sep 24, 2019

Better late than never (sorry). All these proposals sound like the right thing to do.

Proposal: Order peers by the cumulative block size of the active wants.

I.e., prioritize based on how much data we're actively sending to each peer? Yeah, that definitely sounds like a good idea.

Proposal: Pop wants one-by-one from the queue up to the maximum message size.

👍. I still can't remember why we did it the current way. Your way makes more sense.

Proposal: Order a peer's wants by priority regardless of group.

👍

Proposal: Assign priority by FIFO.

That's how it should work. But you're right, that's clearly not how it works.

Proposal: Two limited-size worker pools:

👍


dirkmc commented Mar 4, 2020

Implemented as part of the HAVE / DONT_HAVE changes

dirkmc closed this as completed Mar 4, 2020