
Implement bandwidth limiting #3065

Open
whyrusleeping opened this issue Aug 9, 2016 · 71 comments
Labels
exp/expert Having worked on the specific codebase is important help wanted Seeking public contribution on this issue status/deferred Conscious decision to pause or backlog topic/libp2p Topic libp2p

Comments

@whyrusleeping
Member

whyrusleeping commented Aug 9, 2016

We need to place limits on the bandwidth ipfs uses. We can do this in a few different ways, or a combination thereof (a minimal sketch of the first option follows the list):

  • per-peer limiting on each actual connection object
    • pros:
      • low coordination cost (no shared objects between connections)
      • should have lower impact on performance than blindly rate limiting the whole process
    • cons:
      • no flow control between protocols; DHT traffic could drown out bitswap traffic
  • per-subnet limiting
    • pros:
      • avoids rate-limiting LAN/localhost connections.
    • cons:
      • it's not always possible to tell what's "local" (e.g., with IPv6).
  • per-protocol limiting on each stream
    • pros:
      • should have the lowest impact on system performance of the three options
      • each protocol gets its own slice of the pie and doesn't impact the others
    • cons:
      • increased coordination required, need to reference the same limits across multiple streams
      • still makes it difficult to precisely limit the overall bandwidth usage.
  • global limiting using a single rate limiter over all connections
    • pros:
      • will successfully limit the amount of bandwidth ipfs uses.
    • cons:
      • ipfs will be quite slow when rate limited in this way
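
For concreteness, here is a minimal sketch of the first option (per-connection limiting), assuming a token bucket from golang.org/x/time/rate; the type and names are illustrative, not from the go-ipfs codebase:

package limitconn

import (
	"context"
	"net"

	"golang.org/x/time/rate"
)

// rateLimitedConn wraps a net.Conn and pushes every byte through a
// per-connection token bucket.
type rateLimitedConn struct {
	net.Conn
	limiter *rate.Limiter
}

// waitFor blocks until n tokens are available, chunking the requests so we
// never ask WaitN for more than the bucket's burst size (which would error).
func (c *rateLimitedConn) waitFor(n int) error {
	for n > 0 {
		chunk := n
		if b := c.limiter.Burst(); chunk > b {
			chunk = b
		}
		if err := c.limiter.WaitN(context.Background(), chunk); err != nil {
			return err
		}
		n -= chunk
	}
	return nil
}

func (c *rateLimitedConn) Read(p []byte) (int, error) {
	n, err := c.Conn.Read(p)
	if n > 0 {
		if werr := c.waitFor(n); werr != nil {
			return n, werr
		}
	}
	return n, err
}

func (c *rateLimitedConn) Write(p []byte) (int, error) {
	// Pay for the bytes before sending them.
	if err := c.waitFor(len(p)); err != nil {
		return 0, err
	}
	return c.Conn.Write(p)
}

Because each connection owns its limiter, there is no shared state to coordinate, which is exactly the low-coordination-cost property noted above.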

Related Issues:

@whyrusleeping whyrusleeping added help wanted Seeking public contribution on this issue topic/libp2p Topic libp2p exp/expert Having worked on the specific codebase is important labels Aug 9, 2016
@whyrusleeping whyrusleeping added this to the Resource Constraints milestone Aug 9, 2016
@slothbag

Here are two more related issues :)
#920
#1482

@k0d3g3ar

This is critical if you want mass adoption. No one is going to risk their own local Internet connection bandwidth unless they can control it. That means using a 3rd-party bandwidth limiter in front of IPFS, which is just more complexity that shouldn't be necessary.

@fiatjaf

fiatjaf commented Sep 10, 2017

Perhaps use alternative C by default with low limits, but switch to A (or to no limit at all) when IPFS enters an "active" state. The "active" state would be when the user is actively downloading, adding, or pinning something (and for some time afterwards), or when they are using IPFS from a management GUI or JS app.
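
A rough sketch of what that switch could look like, assuming a single shared golang.org/x/time/rate limiter (names and numbers are illustrative):

package limitconn

import "golang.org/x/time/rate"

// setActive raises or lowers the shared limit; rate.Limiter.SetLimit is
// safe for concurrent use, so this can be called from a UI event handler.
func setActive(l *rate.Limiter, active bool) {
	if active {
		l.SetLimit(rate.Inf) // "active" state: no limit at all
	} else {
		l.SetLimit(50 * 1024) // idle state: ~50 kB/s background budget
	}
}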

@EibrielInv

EibrielInv commented Sep 12, 2017

I was thinking of implementing (but never did) a script that alternates every ~120 seconds between "offline" and "online" mode. It could also read the number of connections and restart the client when it passes some threshold.
Something like:

  • Start client "online"
  • Wait 120 seconds
  • Kill client
  • Start client "offline"
  • Wait 120 seconds
  • Kill client
  • [Repeat]

@voidzero

voidzero commented Jan 8, 2018

> global limiting using a single rate limiter over all connections
>   cons:
>     ipfs will be quite slow when rate limited in this way

Global limiting has my vote. And I'm not sure this con holds in all cases: bandwidth already has a hard limit (the capacity of the connection). So if I have a max of 20 mbit down / 2 mbit up and I limit ipfs to half of that, that is still a decent amount of bandwidth, isn't it?

@guybrush

guybrush commented Mar 12, 2018

I think it would be best to do global limiting and then also limit per protocol relative to the global limit. For example, let globalLimitUp = 1 mbit/s and globalLimitDown = 2 mbit/s, and then every protocol gets its share of the available bandwidth depending on how important it is for ipfs to function properly.

Maybe I misunderstand the problem, though; I just came here because I noticed the high bandwidth use.

700 peers and 3.5 Mbps, both numbers climbing with no end in sight? I am on win10 with ipfs@0.4.13, running the daemon with ipfs daemon --routing=dhtclient.

@Stebalien
Member

@guybrush FYI, you can limit the bandwidth usage by turning off the DHT server on your node by passing the --routing=dhtclient flag to your daemon.

@hitchhiker

This is essential; checking back on this. Without limiting, it's hard for us to package this in projects: we can't expect end users to accept such a heavy bandwidth requirement.

@whyrusleeping
Member Author

Please just add an emoji to the issue itself to add your support. Comments in this thread should be reserved for discussion around the implementation of the feature itself.

@jefft0
Contributor

jefft0 commented Jun 5, 2018

I've been running an IPFS daemon for years without problems. But with the latest builds in the past couple weeks, I have a lot of delays in trying to load web pages or even ssh into another server. It's now at the point where I have to shut down the IPFS daemon to do some tasks. My stats are below. The bandwidth doesn't look so bad, so why does my network suddenly seem clogged?

$ for p in /ipfs/bitswap/1.1.0 /ipfs/dht /ipfs/bitswap /ipfs/bitswap/1.0.0 /ipfs/kad/1.0.0 ; do echo ipfs stats bw --proto $p && ipfs stats bw --proto $p && echo "---" ; done
ipfs stats bw --proto /ipfs/bitswap/1.1.0
Bandwidth
TotalIn: 1.1 MB
TotalOut: 6.1 kB
RateIn: 1.9 kB/s
RateOut: 0 B/s

ipfs stats bw --proto /ipfs/dht
Bandwidth
TotalIn: 41 kB
TotalOut: 3.2 kB
RateIn: 483 B/s
RateOut: 1 B/s

ipfs stats bw --proto /ipfs/bitswap
Bandwidth
TotalIn: 0 B
TotalOut: 0 B
RateIn: 0 B/s
RateOut: 0 B/s

ipfs stats bw --proto /ipfs/bitswap/1.0.0
Bandwidth
TotalIn: 0 B
TotalOut: 0 B
RateIn: 0 B/s
RateOut: 0 B/s

ipfs stats bw --proto /ipfs/kad/1.0.0
Bandwidth
TotalIn: 21 MB
TotalOut: 1.6 MB
RateIn: 164 kB/s
RateOut: 8.9 kB/s

@whyrusleeping
Member Author

@jefft0 that's odd... those stats seem relatively normal. Are you seeing any odd CPU activity? What sort of bandwidth utilization does your OS report for ipfs? Also, how many connections does your node normally have?

Another question: since you mentioned noticing this on recent builds, does running an older version of ipfs fix the problem?

@whyrusleeping
Member Author

Also, cc @mgoelzer and @bigs, despite this being on the go-ipfs repo, this is definitely a libp2p issue. Worth getting on the roadmap for sure.

@jefft0
Contributor

jefft0 commented Jun 6, 2018

I solved the problem by restarting my Internet router, restarting the computer, and wiping the IPFS build directory and rebuilding the current version (but keeping my current ~/.ipfs folder). I know this wasn't very methodical, but I was desperate. Next time I have bandwidth problems I'll try to figure out which one of these fixes it.

@whyrusleeping
Member Author

@jefft0 interesting. That's actually more helpful information than you might think, thanks

@whyrusleeping
Member Author

Also, just so everyone watching this thread is aware: we have implemented a connection manager that limits the total number of connected peers. This can be configured in your ipfs config under Swarm.ConnMgr; see the config docs for more details.
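
For example, with the current config keys (values illustrative; see docs/config.md for the full schema):

ipfs config --json Swarm.ConnMgr.LowWater 100
ipfs config --json Swarm.ConnMgr.HighWater 200
ipfs config Swarm.ConnMgr.GracePeriod 20s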

@bigs
Contributor

bigs commented Jun 6, 2018

Definitely a fan of the per-protocol limiting. Perhaps this could be handled with a weighting system? Assign weights to protocols and then set global settings (i.e. throttle after this amount of transfer per duration; halt all transfer after this limit within the duration).
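
As a sketch of how such weights might translate into limiters (assuming golang.org/x/time/rate; protocol IDs and numbers below are illustrative):

package limitconn

import "golang.org/x/time/rate"

// newProtocolLimiters splits a global budget across protocols by weight.
func newProtocolLimiters(globalBytesPerSec float64, weights map[string]float64) map[string]*rate.Limiter {
	var total float64
	for _, w := range weights {
		total += w
	}
	limiters := make(map[string]*rate.Limiter, len(weights))
	for proto, w := range weights {
		share := globalBytesPerSec * w / total
		burst := int(share) // burst of one second's worth of traffic
		if burst < 1 {
			burst = 1
		}
		limiters[proto] = rate.NewLimiter(rate.Limit(share), burst)
	}
	return limiters
}

// Example weights: bitswap gets most of the budget, DHT and identify less.
var defaultWeights = map[string]float64{
	"/ipfs/bitswap/1.2.0": 6,
	"/ipfs/kad/1.0.0":     3,
	"/ipfs/id/1.0.0":      1,
}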

@leshokunin

Very cool to see progress! How's the bandwidth cap (e.g. 50 kB/s) coming along? It'd be super useful for our desktop client :)

@douglasmsi

Is there any news on this topic?

@Stebalien
Member

Not at the moment. The current recommended approach is to limit bandwidth in the OS.
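
For example, on Linux this can be done with tc (a sketch only; adjust eth0 and the default 4001 swarm port to your setup, and note this shapes egress only):

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 2mbit
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip sport 4001 0xffff flowid 1:1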

@lordcirth

Last time I tried using Trickle with IPFS, it only limited the main thread, and all the other threads, which generated all the network traffic, were unlimited. Is there a flag to get around that?

@calroc

calroc commented May 15, 2020

@Stebalien cheers! I really want to use and promote IPFS and I sincerely believe this would help.

@lordcirth it was over a year ago that I last tried it. Something may have changed in the meantime, but back then, IIRC, Trickle did limit IPFS overall, not just the main thread.

@constantins2001

I would also like to use IPFS in a P2P CDN, but since I'm unable to provide users with bandwidth-limiting settings and this issue hasn't really progressed in years, I think IPFS isn't a fit (sadly).

@Clay-Ferguson

I read all the above comments, but I'm still unsure what the final disposition of this issue was.

Here's my docker-compose definition for IPFS, in case anyone familiar with Docker has any input or suggestions, and in case it helps others:

ipfs:
    container_name: ipfs
    environment:
        routing: "dhtclient"
        IPFS_PROFILE: "server"
        IPFS_PATH: "/data/ipfs"
    volumes:
        - '${ipfs_staging}:/export'
        - '${ipfs_data}:/data/ipfs'
    ports:
        - "4001:4001"
        - "8080:8080"
        - "5001:5001"
    networks:
        - net-prod
    image: ipfs/go-ipfs:release

I'm shooting for the minimal viable low-bandwidth configuration with no swarms, just a single instance of everything. The above config seems to work just fine, but I'm unsure whether it's using the least possible bandwidth.

@Stebalien
Member

Enabling the "lowpower" profile should help. That will disable background data reproviding, set really low connection manager limits, and put your node into dhtclient mode.
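
To apply it to an existing repo (then restart the daemon):

ipfs config profile apply lowpower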

@bAndie91

Regarding bandwidth limitation, have you considered limiting it externally (at the OS level)?
It would need ipfs to mark connections according to what kind of traffic they carry (dht, bitswap, meta/data transfer, etc.), so traffic could be controlled by e.g. tc under Linux. It would limit bandwidth adaptively, so unlike trickle it uses spare bandwidth.

see this idea here: https://discuss.ipfs.io/t/limiting-bandwidth-at-os-level/9102

@lordcirth

> Regarding bandwidth limitation, have you considered limiting it externally (at the OS level)? It would need ipfs to mark connections according to what kind of traffic they carry (dht, bitswap, meta/data transfer, etc.), so traffic could be controlled by e.g. tc under Linux. It would limit bandwidth adaptively, so unlike trickle it uses spare bandwidth.
>
> see this idea here: https://discuss.ipfs.io/t/limiting-bandwidth-at-os-level/9102

Some people just want to avoid slowing down their connection, but others want to avoid hitting their bandwidth caps. So preferably we'd want support for both adaptive and capped limiting. It would be better UX to allow configuring bandwidth in IPFS rather than maintaining three different sets of instructions for OS firewalls.

@bAndie91

@lordcirth
Actually, offloading traffic control to an external component can end up as bandwidth capping, prioritization, or anything else, depending on the external logic. My concern with a provisioned bandwidth limit is that it does not use idle capacity, yet still exerts pressure when the limit is reached.
Although these two solutions can co-exist: basic users would use the embedded bandwidth-capping settings, while professional operators would set up their firewall with the help of ipfs packets marked according to traffic type.
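
A hypothetical sketch of the professional-operator half: if ipfs set, say, fwmark 1 on its DHT sockets (e.g. via SO_MARK), a Linux operator could shape just that traffic class:

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 100kbit ceil 1mbit
tc filter add dev eth0 parent 1: protocol ip handle 1 fw flowid 1:10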

@ciprianiacobescu

ciprianiacobescu commented Dec 31, 2020

Good things are done by design (by professionals like the IPFS devs). IPFS is too awesome and useful to remain just a nice thing that experts/hackers use.
End-users expect things to work under real-life conditions. Web3 will be a reality when each user has their own ipfs node, be it a phone, tablet, notebook...

Just to mention, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0175360#sec008 may offer some pointers on how to deal with the problems of discovery, query, and routing.

@christroutner

I'm reviving this topic, as it seems to be a showstopper with regard to UX. I captured this bandwidth issue in a YouTube video, showing exactly what the issue is and why it's so detrimental to front-end UX.

Solutions I've tried:

  • Limit the connections by setting LowWater at 20 and HighWater at 40.
  • Using the 'lowpower' profile
  • Tried to block the ipfs.io nodes that seem to be the source of the bandwidth, but was not successful.

Seems to me there are two possible solutions:

  • Create a blacklist filter to block nodes that push too much bandwidth.
  • Create a per-peer or overall bandwidth limit setting as part of the IPFS node software.

If anyone else has a proposed solution to this problem, I'm keen to try it.

I've also cross-posted this response on this IPFS discussion board thread.

@MysticRyuujin

I'm just here for the comments on this 5-year-old issue...

@aschmahmann
Contributor

@christroutner it looks like the issues you're running into are occurring with js-ipfs, not go-ipfs. I'll put some thoughts on what you may want to look into with your js-ipfs nodes in the forum post.

@calroc

calroc commented May 6, 2021 via email

@bachrc

bachrc commented Dec 1, 2022

I am bumping this because, on low-bandwidth connections like mine, it crashes my home network. And it seems like Docker can't do the job.

@christroutner

The only solution I've found is to wrap Kubo with Trickle. I created this Docker container, which does that. I'm able to set the maximum bandwidth as an environment variable in the docker-compose file.

This is less than ideal. It would be a much better user experience if these kinds of bandwidth limits could be set in the IPFS config file.
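
For reference, the wrapper essentially boils down to something like this (numbers illustrative; trickle's -u/-d take KB/s, and it works via an LD_PRELOAD libc shim, so results can vary with statically linked binaries):

trickle -s -u 500 -d 1000 ipfs daemon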

@paulgmiller

paulgmiller commented Feb 5, 2023

Understand that this issue is deferred, but I also noticed that /ipfs/id/push/1.0.0 seems to be doing the majority of the egress bandwidth (and hence cost with my public cloud provider).

It doesn't seem much affected by routing type or the HighWater/LowWater marks. Here's an example from the first ~12 minutes of running the node.

-> % k exec zipfs-0 -- ipfs stats bw --proto /ipfs/id/push/1.0.0
Bandwidth
TotalIn: 479 kB
TotalOut: 66 MB
RateIn: 990 B/s
RateOut: 4.0 kB/s


-> % k exec zipfs-0 -- ipfs stats bw                            
Bandwidth
TotalIn: 128 MB
TotalOut: 97 MB
RateIn: 134 kB/s
RateOut: 123 kB/s

% k exec zipfs-0 -- ipfs config Swarm
{
  "AddrFilters": null,
  "ConnMgr": {
    "HighWater": 10,
    "LowWater": 5
  },
  "DisableBandwidthMetrics": false,
  "DisableNatPortMap": false,
  "EnableHolePunching": false,
  "RelayClient": {
    "Enabled": false
  },
  "RelayService": {},
  "ResourceMgr": {},
  "Transports": {
    "Multiplexers": {},
    "Network": {},
    "Security": {}
  }
}

-> % k exec zipfs-0 -- ipfs config Routing
{
  "Methods": null,
  "Routers": null,
  "Type": "dhtclient"
}

@paulgmiller

I eventually used the resource manager to throttle this down somewhat aggressively.

ipfs config --json Swarm.ResourceMgr.Limits.System.ConnsOutbound 20
ipfs config --json Swarm.ResourceMgr.Limits.System.StreamsOutbound 40
ipfs config --json Swarm.ResourceMgr.Limits.System.ConnsInbound 20
ipfs config --json Swarm.ResourceMgr.Limits.System.StreamsInbound 40

But it still seems strange that the push protocol would use that much by default. Maybe I should create a separate issue to limit just that protocol, or to optimize it.

@ShadowJonathan

What is the push protocol?

@paulgmiller

From what I can see, the push protocol is actually part of libp2p:
https://github.com/libp2p/go-libp2p/blob/master/p2p/protocol/identify/id_push.go

From what I can tell, it rebroadcasts the node's identity to all peers when its protocols or addresses change. That basically shouldn't ever be happening on my setup, so something's wrong: either addresses aren't stable on kubernetes, or there's a bug. I need to get debug information on when this broadcast happens.
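
One way to watch for those pushes is to raise the log level of the identify subsystem (subsystem name as used in go-libp2p; verify with ipfs log ls):

ipfs log level net/identify debug
ipfs log tail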

@Clay-Ferguson

Speaking of kubernetes, I think it might have rate-limiting, because I've researched this in the past (although I haven't yet moved my own project onto K8s), and this link from 2019 came up when googling "kubernetes bandwidth limit". I'm currently running "docker swarm" myself, which I don't think can limit bandwidth. Just sharing this info in case it helps.

https://stackoverflow.com/questions/54253622/kubernetes-how-to-implement-a-network-resourcequota

@tx0c

tx0c commented May 18, 2023

Is ipfs stats bw really showing all the bandwidth used by ipfs?

We have an ipfs node running in AWS, suffering from very high network egress cost. We tried the ResourceMgr connection-number limits above, but they did not help much, and we applied a Linux OS-level tc limit of 512 KB/s on TCP port 4001 (a daily cap of ~50 GB), but still see many IPNS update failures.

The following is from one recent day, counting from 0 after restarting ipfs. ipfs stats bw says Total Up is ~5.5 GB, but AWS CloudWatch monitoring says ~50 GB of NetworkOut was used, so where does the discrepancy come from? Linux-level tools like ifstat also show 500 KB/s of egress, while ipfs stats bw says Rate Up ranges from 40 to 140 kB/s, which differs a lot from what is observed at the OS level. (I'm sure no other application is consuming internet, because if I kill ipfs, ifstat drops to almost 0.)

/ # ipfs stats bw --poll -i 5s
Total Up    Total Down  Rate Up     Rate Down
  5.5 GB       16 GB      142 kB/s    268 kB/s      

👍 on per-protocol limiting; that would be the best option:

> • per protocol limiting on each stream

@Jorropo
Contributor

Jorropo commented May 18, 2023

@tx0c ipfs stats bw does not track bandwidth usage by Kubo as a whole; it tracks bandwidth usage for Kubo's protocols.
The key difference is that this does not count the libp2p overhead. Exchanging cryptographic keys and doing TLS handshakes is surprisingly expensive. (Reminder to move off RSA if you are somehow still using it, as its keys are orders of magnitude larger than ECC ones.)

I had not thought about using stream shaping in Kubo before. We want to add configurable throttling, but in our servers such as bitswap or the DHT one, not as stream shaping (because the actual data sent on a stream correlates extremely poorly with the work the server ends up doing; if you only care about transit costs, then stream shaping would make sense, I guess?).
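
If your node still has an RSA identity, recent Kubo versions can rotate it to ed25519; a hedged example (stop the daemon first, and check ipfs key rotate --help on your version):

ipfs key rotate -o old-rsa-key -t ed25519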

@dennis-tra
Collaborator

Half a year ago, I did a very rough calculation for the handshake overhead. Seems relevant to share: https://pl-strflt.notion.site/Traffic-Back-Of-The-Envelope-Calculation-7cc525c9eccc4ec18e0a2e2e41b1f6ba

@tx0c

tx0c commented May 19, 2023

Okay, so the correct term is stream shaping, and that is NEEDED. Is there any ongoing effort toward this (Implement bandwidth limiting #3065)? It has been 7 years since 2016.

Compare with BitTorrent: almost every BitTorrent client app has some kind of in-app bandwidth-limiting configuration, and nobody would dare run BitTorrent unbounded, which would kill all usable bandwidth.

When sharing many files / many ipns keys, IPFS on a desktop kills almost all bandwidth; on an AWS server it means too much network traffic, which makes the network cost very expensive.

@IngwiePhoenix

Came here after having my home network crash twice. My modem just flat-out rebooted, and it's a DrayTec Vygor 164 (or so; it's in the 16 line and I keep forgetting the number).

As long as IPFS runs, my internet is basically unusable. It's deployed on OpenWrt and sits right behind the modem, with ports open and available on IPv4 and IPv6. But I can't keep it running if it literally makes everything else, including my own Headscale VPN, unusable. And I am on a 100/20 mbit/s link, so this should definitely hold up somewhat.

Would love to hear about updates to this, thanks!

@christroutner

The bandwidth consumption of go-ipfs was a constant thorn in my side. I've recently switched to Helia, and my bandwidth issues have gone away.

If you're looking for a JS implementation, this is the best path to sidestep the bandwidth issues experienced when running ipfs-http-client to control a go-ipfs node.

@Ghost-chu

I would like to embed Kubo in my app, but unfortunately it doesn't have a bandwidth limit, which would cause the app to take up too much of users' bandwidth once it's distributed, leading to complaints.
Bandwidth limiting is necessary.
