Api middlewares refactor #2092
Conversation
- added reset call failure watchdog in middleware
- implemented closing on nodeFacade
- updated node's json files
- …etwork/elrond-go into api-middlewares-refactor
}

select {
case m.queue <- struct{}{}:
Interesting way of achieving request throttling. It could also have been achieved with an atomic counter, dropping the request immediately. If I understand correctly, this logic also drops requests on `default`, when the buffered channel is full. (I am thinking out loud.)
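The buffered-channel throttling pattern discussed above can be sketched as follows. This is an illustrative reconstruction, not the actual elrond-go implementation: the type and method names (`numRequestsThrottler`, `CanProcess`, `EndProcessing`) are assumptions, but the `select` with a `default` branch matches the diff.

```go
package main

import "fmt"

// numRequestsThrottler limits concurrent requests by reserving slots
// in a buffered channel (names are illustrative, not elrond-go's).
type numRequestsThrottler struct {
	queue chan struct{}
}

func newNumRequestsThrottler(maxRequests int) *numRequestsThrottler {
	return &numRequestsThrottler{queue: make(chan struct{}, maxRequests)}
}

// CanProcess tries to reserve a slot. The default branch fires when
// the buffered channel is full, so the request is dropped immediately
// instead of blocking.
func (m *numRequestsThrottler) CanProcess() bool {
	select {
	case m.queue <- struct{}{}:
		return true
	default:
		return false
	}
}

// EndProcessing releases a previously reserved slot.
func (m *numRequestsThrottler) EndProcessing() {
	<-m.queue
}

func main() {
	t := newNumRequestsThrottler(2)
	// Two slots available: first two pass, third is dropped.
	fmt.Println(t.CanProcess(), t.CanProcess(), t.CanProcess())
	t.EndProcessing()
	// A slot was released, so the next request passes again.
	fmt.Println(t.CanProcess())
}
```

Compared to an atomic counter, the channel version gives the same drop-when-full behaviour with no explicit locking, at the cost of allocating the channel buffer up front.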
"address": "erd12ytze3088nhdp45rkm4k7ycvqeprlc7mvm6duu52rtzggp0allasw53ndn",
"supply": "1667291666666666666666666674",
"balance": "1664791666666666666666666674",
"address": "erd1ulhw20j7jvgfgak5p05kv667k5k9f320sgef5ayxkt9784ql0zssrzyhjp",
Please add this change to the PR description as well.
@@ -11,40 +11,40 @@
"adaptivity": false,
"initialNodes": [
{
"pubkey": "153dae6cb3963260f309959bf285537b77ae16d82e9933147be7827f7394de8dc97d9d9af41e970bc72aecb44b77e819621081658c37f7000d21e2d0e8963df83233407bde9f46369ba4fcd03b57f40b80b06c191a428cfb5c447ec510e79307",
"address": "erd1qqqqqqqqqqqqqpgq73ejktqxvhgem304gxgsfa5gtcguh8p3wt8sk7nsql"
"pubkey": "cbc8c9a6a8d9c874e89eb9366139368ae728bd3eda43f173756537877ba6bca87e01a97b815c9f691df73faa16f66b15603056540aa7252d73fecf05d24cd36b44332a88386788fbdb59d04502e8ecb0132d8ebd3d875be4c83e8b87c55eb901",
Please add these changes to the PR description as well.
log.Trace("calling reset on WS source limiter")
reset.Reset()
select {
case <-time.After(betweenResetDuration):
So this is a "hard" reset, which means from time to time we forgive everyone. An eventual attacker, starting right before a reset and continuing immediately after it, could send maxNumRequestsPerAddress * 2 requests in a short period of time (just thinking out loud; it shouldn't be an issue, so a sliding-window mechanism is not really required).
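The boundary effect described above can be demonstrated with a minimal sketch of a per-address limiter with a hard reset. The type and field names here (`sourceLimiter`, `counts`) are assumptions for illustration; in the real code the reset is driven periodically from a goroutine via `time.After(betweenResetDuration)`, while here `Reset` is called manually so the effect is easy to see.

```go
package main

import (
	"fmt"
	"sync"
)

// sourceLimiter caps requests per address; a hard Reset forgives
// every address at once (illustrative names, not elrond-go's).
type sourceLimiter struct {
	mut                      sync.Mutex
	counts                   map[string]int
	maxNumRequestsPerAddress int
}

func newSourceLimiter(max int) *sourceLimiter {
	return &sourceLimiter{
		counts:                   make(map[string]int),
		maxNumRequestsPerAddress: max,
	}
}

// Allow admits a request unless the address already hit the cap.
func (sl *sourceLimiter) Allow(addr string) bool {
	sl.mut.Lock()
	defer sl.mut.Unlock()
	if sl.counts[addr] >= sl.maxNumRequestsPerAddress {
		return false
	}
	sl.counts[addr]++
	return true
}

// Reset is the "hard" reset: all counters drop to zero, so requests
// straddling the reset boundary can total 2 * max in a short window.
func (sl *sourceLimiter) Reset() {
	sl.mut.Lock()
	defer sl.mut.Unlock()
	sl.counts = make(map[string]int)
}

func main() {
	sl := newSourceLimiter(3)
	allowed := 0
	for i := 0; i < 6; i++ {
		if sl.Allow("attacker") {
			allowed++
		}
	}
	sl.Reset() // the periodic hard reset fires here
	for i := 0; i < 6; i++ {
		if sl.Allow("attacker") {
			allowed++
		}
	}
	// 3 before the reset + 3 after: twice the per-window cap.
	fmt.Println(allowed)
}
```

A sliding-window limiter would smooth this out, but as the reviewer notes, the 2× burst bound is usually acceptable for this kind of API protection.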
@@ -343,6 +346,14 @@ func (nf *nodeFacade) GetThrottlerForEndpoint(endpoint string) (core.Throttler,
return throttlerForEndpoint, isThrottlerOk
}

// Close will cleanup started go routines
// TODO use this close method
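A common shape for this kind of Close method is a cancel channel that the background goroutine selects on. The sketch below is a hedged reconstruction of that pattern, not the actual nodeFacade code: the type and field names (`facadeSketch`, `cancelFunc`, `done`) are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// facadeSketch starts a background loop in its constructor and stops
// it in Close (illustrative names; not the real nodeFacade).
type facadeSketch struct {
	cancelFunc chan struct{}
	done       chan struct{}
}

func newFacadeSketch() *facadeSketch {
	f := &facadeSketch{
		cancelFunc: make(chan struct{}),
		done:       make(chan struct{}),
	}
	go f.resetLoop()
	return f
}

func (f *facadeSketch) resetLoop() {
	defer close(f.done)
	for {
		select {
		case <-time.After(10 * time.Millisecond):
			// periodic work, e.g. reset.Reset() on the limiters
		case <-f.cancelFunc:
			return
		}
	}
}

// Close will cleanup started go routines.
func (f *facadeSketch) Close() error {
	close(f.cancelFunc)
	<-f.done // wait until the loop has actually exited
	return nil
}

func main() {
	f := newFacadeSketch()
	fmt.Println(f.Close() == nil)
}
```

Waiting on the `done` channel makes Close synchronous, which avoids leaking the goroutine past shutdown; closing `cancelFunc` (rather than sending on it) broadcasts to any number of listening loops.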
Perhaps add a Jira ticket for this, so that we don't forget?
System tests passed.
`c.Next()` on the gin context multiple times