
Implement congestion control at set bandwidth limit #1266

Closed

oskarth opened this issue Nov 6, 2018 · 14 comments

@oskarth (Contributor) commented Nov 6, 2018

Problem

As we saw during the hackathon, we sometimes see very high bandwidth usage for unexpected reasons (e.g. a direct TCP connection to a mailserver). This is unacceptable from a user's point of view, and people saw their mobile data plans being drained.

Implementation

Leverage TCP's built-in congestion control and drop packets that go over a certain defined bandwidth limit, e.g. 1 MB/s, or make this a setting that people can toggle (à la BitTorrent clients such as Transmission).

This may require some additional logic to ensure that e.g. the mailserver actually deals properly with dropped packets and implements basic congestion control (such as sending more slowly). This should be verified so that re-delivery is actually attempted.
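A minimal sketch of what such a hard cap could look like on the application side (Go, with illustrative names only; this is not existing status-go code): count bytes per one-second window and refuse anything above the limit.

```go
package bwlimit

import (
	"sync"
	"time"
)

// HardLimiter enforces a fixed byte budget per one-second window.
type HardLimiter struct {
	mu     sync.Mutex
	limit  int       // bytes allowed per window
	used   int       // bytes counted in the current window
	window time.Time // start of the current window
}

func NewHardLimiter(bytesPerSec int) *HardLimiter {
	return &HardLimiter{limit: bytesPerSec, window: time.Now()}
}

// Allow reports whether a payload of n bytes fits under the cap;
// callers drop the payload when it returns false.
func (l *HardLimiter) Allow(n int) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if now := time.Now(); now.Sub(l.window) >= time.Second {
		l.window, l.used = now, 0
	}
	if l.used+n > l.limit {
		return false
	}
	l.used += n
	return true
}
```

A caller would check `Allow(len(payload))` on every incoming payload and discard it when the limiter says no, relying on re-delivery (which, per the acceptance criteria below, needs to be verified).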

Acceptance Criteria

  1. Hard limit on bandwidth consumption.
  2. Ideally toggleable.
  3. Ensure that something like the mailserver still works with re-delivery, though perhaps with worse latency.

Notes

Support for this should already be built into TCP: https://en.wikipedia.org/wiki/TCP_congestion_control

So this is more about counting incoming traffic and dropping it on the floor if it is above some threshold.

Hardcoding a bandwidth limit and putting it behind a feature flag to play around with seems reasonable.
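As a sketch, the feature flag and hardcoded limit could start out as simple as two constants (placeholder names and values, not existing status-go configuration):

```go
// Illustrative placeholders only.
const (
	bandwidthLimitEnabled = false   // feature flag to play around with
	bandwidthLimitPerSec  = 1 << 20 // hard cap: 1 MB/s of incoming traffic
)
```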

Future Steps

@oskarth (Contributor, Author) commented Nov 6, 2018

[TCP congestion control - Wikipedia](https://en.wikipedia.org/wiki/TCP_congestion_control)

@dshulyak (Contributor) commented Nov 6, 2018

Are you sure that TCP congestion control is applicable here? I have never seen rate limiting implemented using congestion control, and I am also unaware of how an application could control that window.

I added a rate-limiting mechanism for Whisper here: status-im/whisper#1. The goal is exactly to have a hard cap on the resources that can be spent, with the ability to blacklist peers that send us more data than we want from them. However, this is implemented purely on the application side.

Also, as I remember, there was no clear understanding of where that traffic was coming from.
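A rough sketch of that per-peer idea (an illustration of the approach only, not the actual status-im/whisper#1 code): account for bytes per peer and blacklist any peer that exceeds its quota.

```go
package ratelimit

import "sync"

// PeerLimiter tracks bytes received per peer and blacklists offenders.
type PeerLimiter struct {
	mu        sync.Mutex
	quota     int64            // max bytes accepted per peer per period
	used      map[string]int64 // bytes seen per peer ID in the current period
	blacklist map[string]bool  // peers that exceeded their quota
}

func NewPeerLimiter(quota int64) *PeerLimiter {
	return &PeerLimiter{
		quota:     quota,
		used:      make(map[string]int64),
		blacklist: make(map[string]bool),
	}
}

// Accept records n bytes from the given peer and reports whether the
// peer is still within its quota; offending peers are blacklisted.
func (p *PeerLimiter) Accept(peerID string, n int64) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.blacklist[peerID] {
		return false
	}
	p.used[peerID] += n
	if p.used[peerID] > p.quota {
		p.blacklist[peerID] = true
		return false
	}
	return true
}

// Reset starts a new accounting period (e.g. called once per second).
func (p *PeerLimiter) Reset() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.used = make(map[string]int64)
}
```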

@pedropombeiro (Contributor)

It could also be interesting to be able to ask the mailserver for an estimate of how much data it will be returning to us. That would allow us to ask the user for authorization to download messages (in case they are not on Wi-Fi, for example), maybe as an inline message in the chat.

@oskarth (Contributor, Author) commented Nov 6, 2018

The point is that there should be some way for an endpoint to limit traffic, for example the way a BitTorrent client does it.

IIRC the mailserver uses TCP. If packets are dropped, most TCP handling code should deal with it properly, since it is a design goal of TCP to be a reliable transport even when packets are lost, reordered, or delayed.

@dshulyak (Contributor) commented Nov 8, 2018

I wasn't able to find any cases where congestion control was used to implement a rate limiter. It seems that the common solution is to use flow control: the application can slow down and keep the receive buffer full, which will make TCP send the necessary feedback to the sender, and the sender will have to slow down. This can be added by wrapping all RLPx streams with a single token-bucket rate limiter, provided by juju/ratelimit, for example. The tricky part is to guarantee that it won't do more harm than good given the mailserver's bursts of messages; I think mobile wants to receive those as fast as possible.
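A minimal sketch of that wrapping, using juju/ratelimit's `Reader` with a single shared bucket (the endpoint and numbers below are placeholders; wiring this into the actual RLPx streams is the part that would need the geth patch):

```go
package main

import (
	"io"
	"net"

	"github.com/juju/ratelimit"
)

// wrapConn returns a reader that drains conn through the shared token
// bucket. Once the bucket is empty, reads block, the receive buffer
// fills up, and TCP flow control pushes back on the sender.
func wrapConn(conn net.Conn, bucket *ratelimit.Bucket) io.Reader {
	return ratelimit.Reader(conn, bucket)
}

func main() {
	// One shared bucket: 1 MB/s sustained rate with a 1 MB burst
	// allowance (placeholder numbers, as discussed above).
	bucket := ratelimit.NewBucketWithRate(1<<20, 1<<20)

	conn, err := net.Dial("tcp", "example.com:30303") // illustrative endpoint
	if err != nil {
		panic(err)
	}
	limited := wrapConn(conn, bucket)
	_, _ = io.Copy(io.Discard, limited) // all reads now pass through the bucket
}
```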

It might be easier with libp2p; they have probably built some solutions for this already, since IPFS itself should be quite resource-hungry. But I still need to figure out the discovery part, so we can try to implement it with a patch to geth.

@oskarth (Contributor, Author) commented Nov 8, 2018

Sounds good. I think we need to figure out better semantics for the mailserver regarding bursts.

> keep receiver buffer full

That's what I was referring to. You're right, I might have used the wrong terminology; the point is more that TCP is designed to deal with this (receive buffer full, etc.).

@dshulyak (Contributor) commented Nov 9, 2018

Do we have any other sources of network traffic on mobile?
The desktop client (on Linux) receives about ~2.5 KB/sec and writes ~1.5 KB/sec, with maybe ~20 channels. A full node with discovery (statusd) and 4 peers consumed 4 KB/sec and 6 KB/sec.
I got these numbers from running nethogs, which is pretty accurate. How could it be that a mobile device was receiving 1 MB/sec of traffic?

@oskarth (Contributor, Author) commented Nov 11, 2018

It was during the hackathon and intermittent; @adambabik, @pombeirp, and @jakubgs have more details. The point of the original issue, IMO, is to make sure it can never happen again.

@dshulyak (Contributor)

I was implying that perhaps we were fetching data from other sources, e.g. Infura or some block explorer. But I can reproduce spikes on mobile when I am fetching history.

@pedropombeiro (Contributor)

For context, what happened at the hackathon was that whenever I managed to connect to a mailserver, I started seeing my downstream connection saturated (both on mobile and desktop). On desktop, I checked with Wireshark and the traffic was coming from the mailserver I was connected to. It probably had to do with the issues we were having with mailservers at the time (later identified as a mismatch in the shh_RequestMessages request format that may have caused us to request a huge window of messages).

Per @oskarth's point, having some kind of rate limiting (or some other solution) would prevent a bug like this from having the same impact in the future (e.g. consuming a mobile user's entire data plan).

@status-github-bot

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@oskarth (Contributor, Author) commented Feb 18, 2019

keepalive

cc @mandrigin

status-github-bot removed the stale label Feb 18, 2019
@status-github-bot

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@oskarth (Contributor, Author) commented Aug 6, 2021

keepalive cc @iurimatias re bandwidth control

Also see vacp2p/research#84
