
Process SubmitBlock requests in parallel #357

Merged: 10 commits into kaspanet:master on Dec 18, 2023

Conversation

@tiram88 (Collaborator) commented on Dec 13, 2023

This PR makes the following changes to the gRPC server:

  1. When the node BPS is greater than 1 (e.g. 10 BPS in testnet-11), SubmitBlock requests are processed in parallel with as many workers as the BPS.
  2. Incoming blocks are rejected when the SubmitBlock handler queue is full (a minimal sketch of the idea follows below).

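A minimal sketch of the idea, not the actual rusty-kaspa implementation: a tokio-based handler spawns one worker per BPS and feeds them through a bounded channel whose `try_send` rejects blocks once the queue is full. All names here (`spawn_submit_block_workers`, `SubmitBlockRequest`, `enqueue_or_reject`, the channel capacity) are illustrative assumptions.

```rust
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};

// Illustrative stand-in for the real gRPC SubmitBlock message.
struct SubmitBlockRequest;

/// Spawn `network_bps` workers that consume SubmitBlock requests from a bounded queue.
/// Must be called from within a tokio runtime.
fn spawn_submit_block_workers(network_bps: usize) -> mpsc::Sender<SubmitBlockRequest> {
    let workers = network_bps.max(1);
    // Bounded queue: once it fills up, new submissions are rejected rather than buffered.
    let (tx, rx) = mpsc::channel::<SubmitBlockRequest>(workers);
    let rx = Arc::new(Mutex::new(rx));

    // One worker per BPS, so a 10 BPS network (e.g. testnet-11) gets 10 parallel handlers.
    for _ in 0..workers {
        let rx = rx.clone();
        tokio::spawn(async move {
            loop {
                // Take the next request; the lock is released before processing,
                // so workers validate blocks in parallel.
                let request = rx.lock().await.recv().await;
                match request {
                    Some(_req) => {
                        // Validate the block and submit it to consensus here.
                    }
                    None => break, // channel closed; shut this worker down
                }
            }
        });
    }
    tx
}

/// At the RPC boundary: reject immediately instead of blocking when the queue is full.
fn enqueue_or_reject(
    tx: &mpsc::Sender<SubmitBlockRequest>,
    req: SubmitBlockRequest,
) -> Result<(), String> {
    tx.try_send(req)
        .map_err(|_| "block dropped: SubmitBlock queue is full".to_string())
}
```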
Resolved review threads: rpc/grpc/server/src/adaptor.rs, rpc/grpc/server/src/connection.rs (two threads)
@coderofstuff (Collaborator) commented:

Fixes #355

@michaelsutton merged commit 6658164 into kaspanet:master on Dec 18, 2023
6 checks passed
smartgoo pushed a commit to smartgoo/rusty-kaspa that referenced this pull request Jun 18, 2024
* Add some properties to gRPC server methods

* Apply method properties to requests handling

* Let the service have a full Config and rename bsp to network_bps

* Store routing policies inside the routing map

* When a SubmitBlock fails because the route is full, drop the block and report the accurate reason to the client

* While processing a submitted block, report the new block sequentially to the mempool

* On SubmitBlock failure, send a response with both the reason and an error message

* Add a drop fn to methods with DropIfFull routing policy

* Embed the drop fn into the DropIfFull variant
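Taken together, the last few commits describe a per-method routing policy stored in the routing map, with the drop behavior embedded in the policy itself. A minimal sketch of that shape follows; the names (`RoutingPolicy`, `DropFn`, `route_request`) are hypothetical and not the actual rusty-kaspa types.

```rust
// Hypothetical drop callback invoked when a DropIfFull route rejects a request,
// e.g. to send the client a response carrying both the reason and an error message.
type DropFn = Box<dyn Fn() + Send + Sync>;

// Hypothetical per-method routing policy kept inside the routing map.
enum RoutingPolicy {
    /// Wait for queue capacity; the default for most RPC methods.
    Enqueue,
    /// If the route's queue is full, drop the request and run the embedded callback
    /// instead of stalling the connection (the behavior used for SubmitBlock).
    DropIfFull(DropFn),
}

fn route_request(policy: &RoutingPolicy, queue_is_full: bool) {
    match policy {
        RoutingPolicy::Enqueue => {
            // Await free capacity, then enqueue the request.
        }
        RoutingPolicy::DropIfFull(on_drop) => {
            if queue_is_full {
                // Report the accurate rejection reason back to the client.
                on_drop();
            } else {
                // Enqueue normally while there is room.
            }
        }
    }
}
```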