fix storage limits #3066
Comments
@whyrusleeping if this is still open, I would love to take a crack at this.
@lanzafame I don't believe this has been resolved. At the very least, we need to test it. If the test proves that it's broken, then we need to fix it. Go ahead and give it a shot :)
@whyrusleeping So I re-enabled the repo-gc-auto tests, and they still fail.
I have been able to solve 12, but am still digging into 11. |
@whyrusleeping I have been digging around all
@lanzafame hrm... I think @kevina or @Kubuxu might have an idea
@whyrusleeping I am not that familiar with the code; should the auto-gc be triggered at all? I thought we removed that code.
I'm pretty sure it's still there, I believe @lgierth has it running on the gateways...
This part of docs/config.md is currently inaccurate - the datastore will generally accept writes until the disk is full. The StorageMax value is used for:
So you need to run with
So eventually we should make StorageMax a hard limit on the disk space used, as it was meant to be in the beginning, but I think we should defer that to next year, since proper accounting of the disk space used (without constantly scanning the repo) will actually be a bit tricky, and that's just one ball too many in the air right now :) @lanzafame do you think you could come up with better wording for the StorageMax docs though?
@lgierth Just checking that I understand correctly, with the
@lanzafame yep correct!
Just a small user note on this one. We're working up instructions for folks to mirror our rather large software repos on IPFS. This creates an interesting situation: if the pin fails to finish, IPFS needs to "re-download" all the chunks, since it will garbage-collect them before the next run. This is super frustrating.
@kallisti5 note that almost any solution to the storage limits here is going to result in a problem in your scenario. This issue is mostly about StorageMax not being intuitive, in that you can store more than StorageMax. There are many different models for resolving this, including:
No matter how you slice it, though, in order to protect users' space there will exist some scenario where an automatic GC after a partial download could end up deleting data that isn't protected from GC. It sounds like what you are looking for is a best-effort pin (#3121, but there may be other issues too), wherein you are trying to pin the data but don't need (or even want) all of it; for example, a best-effort wikipedia pin would protect any wikipedia data from GC without downloading all of it. IIRC you can achieve this today with MFS by adding the root to MFS. In your case this would translate to adding the root to MFS and then pinning the rest of the data. However, in your case if a user has
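To make the best-effort-protection point concrete, here is a toy model of a GC mark phase (purely illustrative, not go-ipfs code; the block names and `Store` type are made up). GC keeps everything reachable from the roots that is actually present locally, so a best-effort root protects whatever subset of its DAG you have already downloaded, and no more:

```go
package main

import "fmt"

// Store is a toy local block store: each present block lists the
// blocks it links to. This only models the GC mark phase.
type Store map[string][]string

// protected returns the set of locally present blocks reachable
// from the given roots. Links to blocks that were never downloaded
// are simply skipped -- which is why a "best effort" root protects
// exactly the part of the DAG that is already local.
func protected(s Store, roots []string) map[string]bool {
	seen := map[string]bool{}
	stack := append([]string{}, roots...)
	for len(stack) > 0 {
		b := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		links, ok := s[b]
		if !ok || seen[b] {
			continue // not local, or already marked
		}
		seen[b] = true
		stack = append(stack, links...)
	}
	return seen
}

func main() {
	s := Store{
		"root": {"a", "b"},
		"a":    {},
		// "b" was never downloaded: the pin is partial.
		"loose": {}, // local but not reachable from any root
	}
	marked := protected(s, []string{"root"})
	fmt.Println(marked["root"], marked["a"], marked["loose"]) // true true false
}
```

In this model, "loose" would be swept by GC while "root" and "a" survive, even though "b" is missing, matching the partial-mirror scenario described above.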
whyrusleeping commented Aug 9, 2016
We currently have code that is supposed to limit the size of an fsrepo, but I'm fairly certain it doesn't actually work. We need to go through and fix (and test) this.
This is also (I believe) related to the auto-gc code, which I'm not sure actually works either.
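As I understand the intent of the auto-gc path (a sketch under assumptions, not the actual go-ipfs code): GC should trigger once the repo grows past a watermark percentage of StorageMax, before the hard limit is reached. The field names below mirror the config (StorageMax, StorageGCWatermark), but `shouldGC` itself is illustrative:

```go
package main

import "fmt"

// GCConfig mirrors the relevant config fields; the check itself is
// a sketch of the intended behavior, not the corerepo implementation.
type GCConfig struct {
	StorageMax         int64 // hard limit, in bytes
	StorageGCWatermark int64 // percent of StorageMax at which GC should kick in
}

// shouldGC reports whether the repo has grown past the watermark,
// i.e. past StorageGCWatermark% of StorageMax.
func shouldGC(repoSize int64, cfg GCConfig) bool {
	threshold := cfg.StorageMax * cfg.StorageGCWatermark / 100
	return repoSize > threshold
}

func main() {
	cfg := GCConfig{StorageMax: 10 << 30, StorageGCWatermark: 90} // 10 GiB, 90%
	fmt.Println(shouldGC(8<<30, cfg))     // false: under the watermark
	fmt.Println(shouldGC((9<<30)+1, cfg)) // true: past 90% of 10 GiB
}
```

If the real code never wires a check like this into the write path (or the daemon loop), that would explain both symptoms above: StorageMax not being enforced and auto-gc never firing.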
Code links:
Notes: