IPFS Client OOMKill #7954
2021-05-24 next step: see if we can reproduce this locally.
Hey guys, any updates on this? I'm facing similar issues in a highly concurrent environment.
@tsoeiroecore: are you able to provide a reproducible case?
Closing due to no response with a reproducible case. |
Version information:
go-ipfs version: 0.7.0-ea77213
Repo version: 10
System version: amd64/linux
Golang version: go1.14.4
Description:
When adding multiple files in parallel (or even adding many single files in sequence over a long period of time), the IPFS client is being OOM-killed. Memory keeps growing until it reaches the limit (in my case 5 GiB), and the client is killed.
I've tried a couple of suggestions from this issue #3532 without much luck.
ipfs config --bool Swarm.DisableBandwidthMetrics true
--enable-gc
--routing=dhtclient
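For reference, the three mitigations above can be applied together before restarting the daemon. A minimal sketch, assuming a standard go-ipfs install with `ipfs` on PATH; the flag and config names are taken from the issue text, not re-verified against every release:

```shell
# Persist the config change (stored under IPFS_PATH, ~/.ipfs by default)
ipfs config --bool Swarm.DisableBandwidthMetrics true

# Start the daemon with the per-run flags mentioned above
ipfs daemon --enable-gc --routing=dhtclient
```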
How to Reproduce:
Generate files
for i in {1..100}; do dd if=/dev/random of=file_$i count=1 bs=102400000; done
Add files through API
for i in {1..100}; do curl -XPOST -F file=@file_$i "http://localhost:5001/api/v0/add"& done
Or directly on the CLI (faster)
for i in {1..100}; do ipfs add file_$i & done
Depending on your machine configurations (disk, memory etc) the oomkill might happen even when sending 100 x 10MB files.
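To confirm the growth before the OOM kill, the daemon's resident set can be sampled from `/proc` while the repro loop runs. A minimal sketch, assuming Linux and that the daemon's process name is `ipfs`; the `rss_kib` helper is ours, not part of go-ipfs:

```shell
# Print the resident set size (VmRSS, in KiB) of a given PID
rss_kib() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Take ten one-second samples of the ipfs daemon, if it is running
pid=$(pgrep -x ipfs | head -n1 || true)
if [ -n "$pid" ]; then
  for _ in 1 2 3 4 5 6 7 8 9 10; do
    echo "$(date +%T) $(rss_kib "$pid") KiB"
    sleep 1
  done
fi
```

If the sampled VmRSS climbs monotonically toward the cgroup/system limit during the add loop, that matches the behavior described above.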
This seems like a problem with how memory is managed by IPFS.
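Not suggested in the thread, but since go-ipfs is written in Go, one generic knob worth trying is the Go runtime's GOGC environment variable, which sets the garbage-collection target percentage (default 100); lower values trade CPU for a smaller heap. A sketch, offered as an assumption rather than a confirmed fix:

```shell
# Run the daemon with a more aggressive GC target
GOGC=50 ipfs daemon --enable-gc --routing=dhtclient
```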