
Improve latency handling #494

Merged: 6 commits merged into develop from handle_latencies, Aug 11, 2022

Conversation

@ghost ghost commented Aug 5, 2022

Description

Improve latency handling to make the system more resilient

Fixes #498 #497 #499

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

To test under high latency, we can use a tool like comcast to inject latency, bandwidth limits, and packet loss:

comcast --device=lo0 --latency=200 --target-bw=1000 --packet-loss=10%
  • We should not have timeouts in the ReplicateTransactionChain
  • We should not have timeouts in the self-repair

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

@ghost ghost added the bug Something isn't working label Aug 5, 2022
@ghost ghost self-assigned this Aug 5, 2022
Samuel added 2 commits August 5, 2022 16:11
GenStage multi calls can be a bottleneck
This commit tries to leverage direct process spawning for each message
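The idea behind this commit could be sketched as follows (a minimal illustration only, assuming the hypothetical names `Sketch.Dispatcher` and `dispatch/2`; this is not the project's actual code):

```elixir
# Hypothetical sketch: instead of funnelling every message through a
# GenStage multi call, spawn a lightweight task per message so that one
# slow consumer cannot block the others.
defmodule Sketch.Dispatcher do
  def dispatch(messages, handler) do
    messages
    # Start one task per message; each runs concurrently.
    |> Enum.map(fn msg -> Task.async(fn -> handler.(msg) end) end)
    # Collect the results, bounding the wait per task.
    |> Enum.map(&Task.await(&1, 5_000))
  end
end
```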
@apoorv-2204 apoorv-2204 self-requested a review August 5, 2022 14:20
@apoorv-2204
Contributor

# TODO: Provide a better waiting time management

We can also leverage this, as mentioned by Samuel:

# TODO: Provide a better waiting time management
# for example a rolling percentile latency could be a way to achieve this
# (https://cs.stackexchange.com/a/129178)
@ghost ghost added the P2P Involve P2P networking label Aug 8, 2022
@ghost ghost force-pushed the handle_latencies branch 4 times, most recently from 34e82ed to 390e75b Compare August 9, 2022 09:28
@ghost ghost marked this pull request as ready for review August 9, 2022 09:28
@ghost ghost changed the title Handle latencies Improve latency handling Aug 9, 2022
@ghost ghost requested review from prix-uniris and removed request for apoorv-2204 August 9, 2022 09:38
@@ -59,6 +59,7 @@ defmodule Archethic.P2P.Message do
alias __MODULE__.RegisterBeaconUpdates
Contributor
A better way to import could be the multi-alias syntax: alias __MODULE__.{ModuleName1, ModuleName2}.
https://elixir-lang.org/getting-started/alias-require-and-import.html#multi-aliasimportrequireuse
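For illustration, the multi-alias syntax groups repeated prefixes into one line (`GetTransaction` is an assumed example name here; only `RegisterBeaconUpdates` appears in the diff above):

```elixir
# Instead of repeating the prefix on every line:
alias Archethic.P2P.Message.GetTransaction
alias Archethic.P2P.Message.RegisterBeaconUpdates

# the multi-alias syntax expresses both in a single statement:
alias Archethic.P2P.Message.{GetTransaction, RegisterBeaconUpdates}
```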

Author

I think this could be improved across the entire codebase, so it would make the scope of this PR too big.
But yes, you are right, it would be better.

Contributor

@prix-uniris prix-uniris left a comment

A few typos to fix and improvements to make.

Samuel added 4 commits August 9, 2022 14:10
Renaming `from_map` to `cast` removes the confusion with the function
`to_map`, which is used by the API, while `cast` is used by the DB to
convert a map into a struct.
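The naming convention described by this commit could be sketched as follows (a hypothetical module with illustrative field names, assuming the `Sketch.Transaction` name; not the project's actual struct):

```elixir
# Hypothetical sketch of the cast/to_map pairing: `cast/1` converts a DB
# map into a struct, while `to_map/1` serialises the struct back into a
# map for the API.
defmodule Sketch.Transaction do
  defstruct [:address, :type]

  # DB side: map -> struct.
  def cast(map) when is_map(map) do
    %__MODULE__{address: map["address"], type: map["type"]}
  end

  # API side: struct -> map.
  def to_map(%__MODULE__{address: address, type: type}) do
    %{"address" => address, "type" => type}
  end
end
```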
@apoorv-2204 apoorv-2204 requested review from apoorv-2204 and removed request for apoorv-2204 August 10, 2022 05:46
Contributor

@prix-uniris prix-uniris left a comment

Tested with comcast on 2 local nodes; it worked fine:
comcast --device=lo --latency=200 --target-bw=1000 --packet-loss=10%
LGTM

@ghost ghost merged commit 0d14214 into develop Aug 11, 2022
@ghost ghost deleted the handle_latencies branch August 11, 2022 07:07
@ghost ghost mentioned this pull request Aug 11, 2022