
Need better architecture for transaction forwarding #4071

Closed
pgarg66 opened this issue Apr 29, 2019 · 9 comments

@pgarg66 (Contributor) commented Apr 29, 2019

Problem

The current transaction forwarding logic may not keep up with leader rotation, which in turn causes unnecessary load on the network.

Proposed Solution

  1. Forward the transactions to a node that's not the leader yet, but will be leader soon (within half a slot).
  2. That'll give the node a head start to pre-process the transactions and be ready to commit them in its leader slot.
  3. This will "kind of" guarantee that the transaction will be processed by one of the nodes, and prevent unnecessary re-transmissions after the first forward.
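The steps above can be sketched as follows. This is a minimal illustration, not the actual Solana implementation; `LeaderSchedule`, `forward_target`, and the round-robin schedule are all hypothetical stand-ins.

```rust
// Hedged sketch of the proposed forwarding target selection.
// All names here are illustrative, not the real Solana API.
type NodeId = u64;
type Slot = u64;

/// Toy leader schedule: slot -> leader, repeating round-robin.
struct LeaderSchedule {
    leaders: Vec<NodeId>,
}

impl LeaderSchedule {
    fn leader_for(&self, slot: Slot) -> NodeId {
        self.leaders[(slot as usize) % self.leaders.len()]
    }
}

/// Forward to the node that is not yet the leader but will be next,
/// giving it a head start to pre-process the transactions.
fn forward_target(schedule: &LeaderSchedule, current_slot: Slot) -> NodeId {
    schedule.leader_for(current_slot + 1)
}

fn main() {
    let schedule = LeaderSchedule { leaders: vec![10, 20, 30, 40] };
    // During slot 2 the leader is node 30; forward to node 40,
    // the leader of slot 3.
    assert_eq!(forward_target(&schedule, 2), 40);
    // The toy schedule wraps around at the end.
    assert_eq!(forward_target(&schedule, 3), 10);
}
```

The "within half a slot" timing from step 1 would additionally gate when the forward is sent, which this sketch omits.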
@pgarg66 (Contributor Author) commented Apr 29, 2019

@aeyakovenko Just FYI

@pgarg66 pgarg66 added this to the Silver Strand v0.15.0 milestone Apr 29, 2019

@aeyakovenko (Member) commented Apr 29, 2019

The validator should retry on every block.

So check that the tx is still valid, and when a new leader starts a block, send it to that leader until the tx appears in the blocktree.

A tx is valid in the context of the currently proposed block.
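The retry policy described here can be sketched as a per-block check. `Blocktree` below is just a set of confirmed signatures and `retry_on_new_block` is a made-up name; neither is the real Solana structure.

```rust
// Hedged sketch of "retry on every block until the tx appears
// in the blocktree". Types are simplified stand-ins.
use std::collections::HashSet;

type Signature = u64;

struct Blocktree {
    confirmed: HashSet<Signature>,
}

/// Called once per new leader block. Returns `true` if the tx should
/// stay in the retry set, pushing it onto `sends` when it must be
/// re-forwarded to the new leader.
fn retry_on_new_block(
    sig: Signature,
    still_valid: bool,
    blocktree: &Blocktree,
    sends: &mut Vec<Signature>,
) -> bool {
    if blocktree.confirmed.contains(&sig) {
        return false; // tx appeared in the blocktree; stop retrying
    }
    if !still_valid {
        return false; // no longer valid in the proposed block; drop it
    }
    sends.push(sig); // send to the leader that just started a block
    true
}

fn main() {
    let blocktree = Blocktree { confirmed: HashSet::from([7]) };
    let mut sends = Vec::new();
    // Already in the blocktree: no retry.
    assert!(!retry_on_new_block(7, true, &blocktree, &mut sends));
    // Still valid and not confirmed: re-forward to the new leader.
    assert!(retry_on_new_block(9, true, &blocktree, &mut sends));
    assert_eq!(sends, vec![9]);
}
```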

@carllin (Contributor) commented Apr 29, 2019

@aeyakovenko, so some clarifications:

  1. The first validator that gets a tx will own that transaction, and is in charge of forwarding it to every leader it detects until the transaction gets included in some block.

  2. Other validators that receive forwarded transactions should never retransmit them.

  3. Should the validator also ensure that the block is a descendant of the fork it's currently voting on before removing that tx from its cache?
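Points 1 and 2 amount to an ownership rule keyed on where the tx came from. A minimal sketch, with illustrative names only:

```rust
// Hedged sketch of tx ownership: only the validator that first
// receives a tx directly from a client "owns" it and keeps forwarding
// it to each leader; txs that arrive already forwarded are never
// retransmitted.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Origin {
    Client,    // received directly from a client
    Forwarded, // received from another validator
}

/// Does this validator take ownership of forwarding this tx?
fn owns_forwarding(origin: Origin) -> bool {
    matches!(origin, Origin::Client)
}

fn main() {
    assert!(owns_forwarding(Origin::Client));
    assert!(!owns_forwarding(Origin::Forwarded)); // never retransmit
}
```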

@aeyakovenko (Member) commented Apr 29, 2019

@carllin Re 3: I don't know how much that matters. We might have clients ask the validator to retry until N confirmations. That's V2. I imagine leaders will slurp any transactions that they can from failed forks as well.

@carllin (Contributor) commented Apr 29, 2019

@aeyakovenko, right now in the forwarding logic there are 3 cases (@pgarg66 feel free to correct me here):

  1. The validator forwards the batch of transactions that they could not process in their leader slot.

  2. If a validator is not the leader but will be the next leader, they cache the transactions for processing later.

  3. Otherwise, the validator drops the txs.

The question is, what benefits do you see from having each validator "own" the forwarding responsibility for the set of txs they get from the client? In our current implementation, forwarding shouldn't snowball the number of unprocessed txs (you can only forward at most as many txs as you've received). One potential benefit I see with your approach is that you can distribute the responsibility of expiring txs and the network resources for forwarding batches of txs across many validators. Could you expand on that?
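The three cases above reduce to a simple dispatch on the validator's position in the leader schedule. A sketch with made-up enum names:

```rust
// Hedged sketch of the three forwarding cases; names are illustrative,
// not taken from the Solana codebase.
#[derive(Debug, PartialEq)]
enum Role {
    JustFinishedLeaderSlot, // case 1: has leftover unprocessed txs
    UpcomingLeader,         // case 2: will be the next leader
    Other,                  // case 3: neither
}

#[derive(Debug, PartialEq)]
enum Action {
    ForwardUnprocessed, // forward the batch it could not process
    Cache,              // hold txs for its own upcoming leader slot
    Drop,
}

fn forwarding_action(role: Role) -> Action {
    match role {
        Role::JustFinishedLeaderSlot => Action::ForwardUnprocessed,
        Role::UpcomingLeader => Action::Cache,
        Role::Other => Action::Drop,
    }
}

fn main() {
    assert_eq!(
        forwarding_action(Role::JustFinishedLeaderSlot),
        Action::ForwardUnprocessed
    );
    assert_eq!(forwarding_action(Role::UpcomingLeader), Action::Cache);
    assert_eq!(forwarding_action(Role::Other), Action::Drop);
}
```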

@aeyakovenko (Member) commented Apr 29, 2019

@carllin because clients can ingress at any point without caring who the leader is. And validators can verify and batch everything, and leaders can QoS by stake weight.
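The "QoS by stake weight" idea could look like the following, purely as an illustration (the `qos_order` function and the `(stake, batch)` representation are assumptions, not the real implementation):

```rust
// Hedged sketch of stake-weighted QoS: the leader drains queued
// batches in descending order of the forwarding validator's stake.
fn qos_order(mut batches: Vec<(u64, &'static str)>) -> Vec<&'static str> {
    // Each entry is (forwarder's stake, batch id); highest stake first.
    batches.sort_by(|a, b| b.0.cmp(&a.0));
    batches.into_iter().map(|(_, id)| id).collect()
}

fn main() {
    let order = qos_order(vec![(5, "low"), (100, "high"), (50, "mid")]);
    assert_eq!(order, vec!["high", "mid", "low"]);
}
```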

@pgarg66 (Contributor Author) commented May 1, 2019

The client transaction should be forwarded to the leader for slot = (current slot + 1).

@pgarg66 (Contributor Author) commented May 7, 2019

Capturing discussion from our meeting today.

  1. We'll look into preprocessing the transactions on the nodes that are soon to be leader. This could help create a pipeline of transaction processing on the nodes that are not leader yet, and possibly squeeze out some performance gain.

  2. We'll hold off on implementing the design proposal for a node to own the client transaction (where it forwards the transaction to leader nodes until it's processed or expired).

  3. As another optimization, a node can check the block hash of a transaction before forwarding it to other nodes. This will help filter out transactions that are too old and expired.
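The pre-forward filter in item 3 can be sketched as an age check on the transaction's recent blockhash. The constant and function name below are made up for illustration; the real limit would come from the bank's blockhash queue.

```rust
// Hedged sketch of item 3: drop transactions whose recent blockhash
// is too old before forwarding them to other nodes.
const MAX_BLOCKHASH_AGE_SLOTS: u64 = 300; // hypothetical age limit

fn should_forward(blockhash_slot: u64, current_slot: u64) -> bool {
    // saturating_sub guards against a blockhash from a future slot.
    current_slot.saturating_sub(blockhash_slot) <= MAX_BLOCKHASH_AGE_SLOTS
}

fn main() {
    assert!(should_forward(1_000, 1_200)); // 200 slots old: still fresh
    assert!(!should_forward(1_000, 1_400)); // 400 slots old: expired, drop
}
```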

@pgarg66 (Contributor Author) commented May 20, 2019

Created the issue for the pending work, as it is not a requirement for the mainnet MVP.

@pgarg66 pgarg66 closed this May 20, 2019
