Adding large directory (10k items) results in an error #1691
Comments
Indeed, adding a directory with many files hits a
So the latest versions of go-unixfs that we use have a DynamicDirectory type which switches itself from BasicDirectory to HAMTDirectory (and back) depending on how many children have been added. When that switch happens, it reads all the children from the BasicDirectory to add them to the HAMTDirectory as new links. I believe we can patch go-unixfs to not re-read children and instead re-use the links already in the BasicDirectory when adding them to the HAMTDirectory.
Basic testing suggests the problem is solved with ipfs/go-unixfs#120.
The problem from the cluster side is that our DAGService is write-only and we cannot really read blocks that have been written previously, since those blocks may have been written to the IPFS daemon of a completely different cluster peer. The assumption when adding is that we just write blocks in place, with no need to re-read them.
Fix #1691: adding fails on large directories
Fixes #1691 by updating to the latest go-unixfs and adding a test. The test is verified to fail on the previous go-unixfs version.
A user reported that adding a directory with 10k items on cluster does not work well.