
Excessive resource usage up to crash during pinning #2988

Closed
ghost opened this issue Jul 23, 2016 · 4 comments
Labels
kind/bug A bug in existing code (including security flaws) topic/bitswap Topic bitswap topic/perf Performance

Comments

@ghost commented Jul 23, 2016

Version/Platform/Processor information (from ipfs version --all):

Basically 0.4.3-rc1

> ipfs version --all
20:37:44.910  INFO   cmd/ipfs: IPFS_PATH /data/ipfs main.go:297
go-ipfs version: 0.4.3-dev-566c08e
Repo version: 4
System version: amd64/linux
Golang version: go1.6.3

Type (bug, feature, meta, test failure, question): bug
Area (api, commands, daemon, fuse, etc): bitswap
Priority (from P0: functioning, to P4: operations on fire): 2

Description:

I tried syncing the pins on the gateways, and one particular object reproducibly crashes uranus.i.ipfs.io. Within a minute or two the daemon climbs to around 200k goroutines and eventually fails to allocate memory.

> ipfs pin add QmRn43NNNBEibc6m7zVNcS6UusB1u3qTTfyoLmkugbeeGJ
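
For reference, one way to watch the goroutine count climb on a running daemon is to poll the API server's pprof endpoint. The sketch below is illustrative only; it assumes the default API address 127.0.0.1:5001 and simply prints the total from the text goroutine profile every few seconds.

// Minimal sketch: periodically report the daemon's goroutine count by reading
// the text goroutine profile from the API server's pprof endpoint.
// Assumes the default API address 127.0.0.1:5001.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	const url = "http://127.0.0.1:5001/debug/pprof/goroutine?debug=1"
	for {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("fetch failed:", err)
			return
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// The first line of the debug=1 dump reads "goroutine profile: total N".
		fmt.Println(time.Now().Format(time.RFC3339), strings.SplitN(string(body), "\n", 2)[0])
		time.Sleep(5 * time.Second)
	}
}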
@ghost ghost added kind/bug A bug in existing code (including security flaws) topic/bitswap Topic bitswap topic/perf Performance labels Jul 23, 2016
@whyrusleeping (Member) commented

Great, disk contention from writing the provider entries to disk. I'll see if I can't find a clever way to batch these.
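
For context, here is a generic illustration of the batching idea, not the actual go-ipfs code: buffer provider-record writes in memory and flush them to the datastore in one pass instead of touching the disk once per entry. The Datastore interface, batch size, and key names are all hypothetical.

// Hypothetical sketch of batching provider-record writes: accumulate entries
// in memory and flush them in one pass instead of one write per entry.
package main

import (
	"fmt"
	"sync"
)

type entry struct {
	key   string
	value []byte
}

// Datastore is a stand-in for whatever actually persists provider records.
type Datastore interface {
	PutMany(entries []entry) error
}

type batchedWriter struct {
	mu      sync.Mutex
	pending []entry
	ds      Datastore
	limit   int
}

func newBatchedWriter(ds Datastore, limit int) *batchedWriter {
	return &batchedWriter{ds: ds, limit: limit}
}

// Put queues an entry and flushes once the batch reaches the limit.
func (w *batchedWriter) Put(key string, value []byte) error {
	w.mu.Lock()
	w.pending = append(w.pending, entry{key, value})
	flush := len(w.pending) >= w.limit
	w.mu.Unlock()
	if flush {
		return w.Flush()
	}
	return nil
}

// Flush writes all queued entries in a single datastore call.
func (w *batchedWriter) Flush() error {
	w.mu.Lock()
	batch := w.pending
	w.pending = nil
	w.mu.Unlock()
	if len(batch) == 0 {
		return nil
	}
	return w.ds.PutMany(batch)
}

// memDS counts how often the "disk" is touched.
type memDS struct{ writes int }

func (m *memDS) PutMany(entries []entry) error {
	m.writes++ // one datastore call per batch rather than per entry
	return nil
}

func main() {
	ds := &memDS{}
	w := newBatchedWriter(ds, 64)
	for i := 0; i < 1000; i++ {
		_ = w.Put(fmt.Sprintf("provider/%d", i), nil)
	}
	_ = w.Flush()
	fmt.Println("datastore calls:", ds.writes) // 16 instead of 1000
}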

@Kubuxu Kubuxu modified the milestones: ipfs-0.4.3, ipfs-0.4.3-rc2 Jul 24, 2016
@ghost (Author) commented Jul 28, 2016

I'm not sure this is a side effect of writing providers to disk. Yesterday I tried master with the write-providers-to-disk commit reverted (the solarnet/v0.4.3-downgrade branch), and it still had this issue. I also tried the latest master after the merge of #2993, and it showed the same excessive resource usage.

I'm also not sure this is a regression in v0.4.3-rc1; it looks more like a general problem with big object graphs, whatever "big" means for the hash in the opening comment. Should we bump it off rc2?

@whyrusleeping (Member) commented

Yeah, let's bump it off rc2.

@Kubuxu Kubuxu removed this from the ipfs-0.4.3-rc2 milestone Jul 28, 2016
@whyrusleeping whyrusleeping added the status/deferred Conscious decision to pause or backlog label Sep 14, 2016
@ghost (Author) commented Mar 7, 2017

Resource usage is bounded now; the goroutine count stays stable (see the linked goroutines graph).
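
For illustration only (this is not the actual change that bounded it), the usual way to keep the goroutine count flat while walking a large DAG is a counting semaphore that caps in-flight fetches. fetchBlock, the worker limit, and the block list below are placeholders.

// Illustrative sketch: bound the number of concurrent block fetches while
// walking a DAG, so the goroutine count stays flat regardless of graph size.
package main

import (
	"fmt"
	"sync"
)

// fetchBlock is a placeholder for retrieving and persisting one block.
func fetchBlock(cid string) {}

func main() {
	const maxWorkers = 16
	sem := make(chan struct{}, maxWorkers) // counting semaphore
	var wg sync.WaitGroup

	blocks := make([]string, 10000)
	for i := range blocks {
		blocks[i] = fmt.Sprintf("block-%d", i)
	}

	for _, b := range blocks {
		wg.Add(1)
		sem <- struct{}{} // blocks once maxWorkers fetches are in flight
		go func(cid string) {
			defer wg.Done()
			defer func() { <-sem }()
			fetchBlock(cid)
		}(b)
	}
	wg.Wait()
	fmt.Println("done; at most", maxWorkers, "fetches ran at once")
}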

@ghost ghost closed this as completed Mar 7, 2017
@ghost ghost removed the status/deferred Conscious decision to pause or backlog label Mar 7, 2017