Implement congestion control at set bandwidth limit #1266
Comments
Are you sure that TCP congestion control is applicable here? I have never seen rate limiting implemented using congestion control, and I am also unaware of how the app can control this window. I added a rate-limiting mechanism for Whisper here: status-im/whisper#1. The goal is exactly to have a hard cap on the resources that can be spent, with the ability to blacklist peers that send more data than we want from them. However, this is implemented purely on the application side. Also, as I remember, there was no clear understanding of where that traffic was coming from.
Could also be interesting to be able to ask the mailserver for an estimate of how much data it will return. That would allow us to ask the user (in case they're not on WiFi, for example) for authorization to download the messages, maybe as an inline message in the chat.
The point is there should be some way for the endpoint to limit traffic, for example à la how a BitTorrent client does it. IIRC the mailserver uses TCP. If packets are dropped, most TCP handling code would deal with it properly, since it's a design goal to provide a reliable transport even if packets are lost, out of order, or delayed.
I wasn't able to find any cases where congestion control was used to implement a rate limiter. It seems that the common solution is to use flow control: the application can slow down and keep the receive buffer full, which will make TCP send the necessary feedback to the sender, and the sender will have to slow down. It can be added by wrapping all RLPx streams with a single token-bucket rate limiter, provided by juju/ratelimit, for example. The tricky part is to guarantee that it won't do more harm because of the mailserver's bursts of messages; I think mobile wants to get them as fast as possible. It might be easier with libp2p, as they have probably built some solutions already, since IPFS itself should be quite resource hungry, but I still need to figure out the discovery part. So we can try to implement it with a patch to geth.
Sounds good, I think we need to figure out better semantics for mailserver re bursts.
That's what I was referring to; you are right, I might've used the wrong terminology. The point is more that TCP is designed to deal with this (receive buffer full, etc.).
Do we have any other sources of network traffic on mobile?
It was during the Hackathon and intermittent; @adambabik @pombeirp @jakubgs have more details. The point of the original issue IMO is more to make sure it can never happen again.
I was implying that perhaps we are fetching data from other sources, e.g. Infura or some block explorer. But I can reproduce the spikes on mobile when I am fetching history.
For context, what happened at the Hackathon was that whenever I managed to connect to a mailserver, my downstream connection became saturated (both on mobile and desktop). On desktop I checked with Wireshark, and the traffic was coming from the mailserver I was connected to. It probably had to do with the issues we were having with mailservers at the time (later identified to be a mismatch in the …). Per @oskarth's point, having some kind of rate limiting (or some other solution) would prevent a bug like this from having the same impact in the future (e.g. consuming a mobile user's entire data plan).
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
keepalive cc @mandrigin |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
keepalive cc @iurimatias re bandwidth control. Also see vacp2p/research#84.
Problem
As we saw during the hackathon, we sometimes consume a lot of bandwidth for unexpected reasons (e.g. a direct TCP connection to a mailserver). This is unacceptable from a user's point of view, and people saw their mobile data plans being drained.
Implementation
Leverage TCP's built-in congestion control and drop packets that go over a defined bandwidth limit, e.g. 1 MB/s, or make this a setting that people can toggle (à la BitTorrent clients such as Transmission).
This may require some additional logic to ensure that, e.g., the mailserver actually deals properly with dropped packets and implements basic congestion control (such as sending more slowly). This should be verified so that re-delivery is actually attempted.
Acceptance Criteria
Notes
Support for this should already be built in: https://en.wikipedia.org/wiki/TCP_congestion_control
So it is more about counting incoming traffic and dropping it on the floor if it is above some threshold.
Hardcoding a BW limit and putting it under a feature flag to play around with seems reasonable.
Future Steps