This seems fairly feasible with a small amount of cleanup, especially for components like the fetcher. Backpressure would propagate as follows:
Upstream components (mainly the synchronizer) will be required to enforce a defined limit on the number of parallel requests they make. They need not worry about timeouts, since the fetcher can now guarantee a response (albeit possibly an error).
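A minimal sketch of what such an in-flight limit could look like for an upstream component; the `RequestLimiter` type and its methods are hypothetical, not casper-node APIs:

```rust
use std::sync::{Arc, Condvar, Mutex};

/// Hypothetical in-flight request limiter for an upstream component
/// (e.g. the synchronizer). Blocking callers once the limit is reached
/// is what propagates backpressure upward.
pub struct RequestLimiter {
    inner: Arc<(Mutex<usize>, Condvar)>,
    max_in_flight: usize,
}

impl RequestLimiter {
    pub fn new(max_in_flight: usize) -> Self {
        RequestLimiter {
            inner: Arc::new((Mutex::new(0), Condvar::new())),
            max_in_flight,
        }
    }

    /// Blocks until a slot is free, then claims it.
    pub fn acquire(&self) {
        let (lock, cvar) = &*self.inner;
        let mut in_flight = lock.lock().unwrap();
        while *in_flight >= self.max_in_flight {
            in_flight = cvar.wait(in_flight).unwrap();
        }
        *in_flight += 1;
    }

    /// Releases a slot once a response (or error) arrives; since the
    /// fetcher guarantees one, there is no separate timeout path here.
    pub fn release(&self) {
        let (lock, cvar) = &*self.inner;
        *lock.lock().unwrap() -= 1;
        cvar.notify_one();
    }

    pub fn in_flight(&self) -> usize {
        *self.inner.0.lock().unwrap()
    }
}
```

Note that the component never releases a slot on its own timer; the slot is returned only when the fetcher delivers a result, which is what the response guarantee below makes safe.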
The fetcher itself knows the capacity of each peer (via chainspec configuration) and can balance load accordingly, keeping an internal, unbounded queue (which is low on memory usage, since it only stores IDs).
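The fetcher's dispatch logic could be sketched roughly as below; the `FetchQueue` type, string peer IDs, and numeric item IDs are all illustrative assumptions, not the actual fetcher types:

```rust
use std::collections::{HashMap, VecDeque};

/// Illustrative sketch: per-peer request capacities (as configured via
/// the chainspec) plus an unbounded backlog of item IDs. The backlog is
/// cheap because it stores only IDs, never full items.
pub struct FetchQueue {
    /// Remaining request slots per peer.
    capacity: HashMap<String, usize>,
    /// Unbounded backlog of item IDs awaiting a free peer slot.
    pending: VecDeque<u64>,
    /// IDs currently in flight, keyed by the peer serving them.
    in_flight: Vec<(String, u64)>,
}

impl FetchQueue {
    pub fn new(capacity: HashMap<String, usize>) -> Self {
        FetchQueue { capacity, pending: VecDeque::new(), in_flight: Vec::new() }
    }

    pub fn enqueue(&mut self, id: u64) {
        self.pending.push_back(id);
        self.dispatch();
    }

    /// Hand queued IDs to any peer with spare capacity.
    fn dispatch(&mut self) {
        while let Some(&id) = self.pending.front() {
            match self.capacity.iter_mut().find(|(_, slots)| **slots > 0) {
                Some((peer, slots)) => {
                    *slots -= 1;
                    let peer = peer.clone();
                    self.pending.pop_front();
                    self.in_flight.push((peer, id));
                }
                None => break, // every peer saturated; ID stays queued
            }
        }
    }

    /// Called when a response (or error) for `id` arrives from `peer`;
    /// the freed slot immediately drains the backlog further.
    pub fn complete(&mut self, peer: &str, id: u64) {
        self.in_flight.retain(|(p, i)| !(p == peer && *i == id));
        *self.capacity.get_mut(peer).unwrap() += 1;
        self.dispatch();
    }

    pub fn pending_len(&self) -> usize { self.pending.len() }
    pub fn in_flight_len(&self) -> usize { self.in_flight.len() }
}
```

Because every in-flight request is guaranteed to resolve, `complete` is always eventually called and capacity cannot leak.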
Networking no longer buffers any messages beyond the buffers built into the transport itself (juliet).
Proper request-response handling is used (see sample code): incoming requests are answered by calling .respond(), and outgoing requests are made solely through EffectBuilder::make_network_request.
Essentially, this associates a response or error with every request made, all enforced by the networking layer. The fetcher and higher-level components need not concern themselves with these details.
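The guarantee can be illustrated with a toy model; `Network`, `make_network_request`, `respond`, and `fail` below are illustrative stand-ins for the real EffectBuilder/networking API, not its actual signatures:

```rust
use std::collections::HashMap;

/// Every outcome is either a payload or an error; there is no third,
/// "silently dropped" state.
#[derive(Debug, PartialEq)]
pub enum Outcome {
    Response(Vec<u8>),
    Error(String),
}

/// Toy model of the invariant the networking layer enforces: each
/// request ID is resolved exactly once.
#[derive(Default)]
pub struct Network {
    pending: HashMap<u64, ()>,
    next_id: u64,
}

impl Network {
    /// Issues a request and returns its ID; the ID stays pending until
    /// it is resolved exactly once.
    pub fn make_network_request(&mut self) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, ());
        id
    }

    /// Peer answered: resolve with a payload. Returns None if the
    /// request was already resolved, modelling the exactly-once rule.
    pub fn respond(&mut self, id: u64, payload: Vec<u8>) -> Option<Outcome> {
        self.pending.remove(&id).map(|_| Outcome::Response(payload))
    }

    /// Timeout or disconnect: resolve with an error instead, so callers
    /// never need their own timeout handling.
    pub fn fail(&mut self, id: u64, reason: &str) -> Option<Outcome> {
        self.pending.remove(&id).map(|_| Outcome::Error(reason.to_string()))
    }
}
```

The point of the model is the shape of the contract, not the mechanics: a caller that issues a request can rely on receiving exactly one `Outcome`, which is why upstream components need no timeout logic of their own.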
See the 4561-request-response-eval branch for some sample code.
Adding the first purely request-based message handling to the fetcher should provide a blueprint for how other components can be integrated post-1.6.
A follow-up issue should be created once this discovery is complete.