Get rid of Hash::dummy from BinaryCacheStore #3935
This is beyond the scope of this PR, but since we're revisiting these store API methods, it's worth noting that we could optimize away the tmpfile if we know a little more in advance. Although optimizing away a tmpfile seems unimpressive, it changes the time taken from O(sum(steps)) to O(max(steps)), which is significant when compression and upload take similar amounts of time.

When the file is known to be small (low-hanging fruit): add a size parameter to `addToStoreCommon`, or use a fancy sink that only writes to a file when it crosses a size limit. This is similar to what `LocalStore::addToStoreFromDump` does.

When we know the nar hash in advance:
For HTTP binary caches, this does require us to change the binary cache filenames to match uncompressed hashes, which seems to be equivalent and can only result in one-time duplication in existing caches when new paths are uploaded.
I don't know yet how IPFS caches fit into this picture, but if those can compress after hashing, this would be beneficial.
Another reason to do this is so we don't need to compress before we can decide to reuse an available nar file.
In this case it does make sense to have both `addToStore(const ValidPathInfo & info, .....)` and `addToStore(....., std::function<ValidPathInfo(HashResult)>)`, where the former can have a default implementation in terms of the latter.
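The pair of overloads suggested above could be sketched roughly as follows. This is a hypothetical, heavily simplified illustration, not the actual Nix API: `Store`, `Source`, `HashResult`, and `ValidPathInfo` are stand-in types, and the "hash computation" is a placeholder.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>

// Simplified stand-ins for the real Nix types (assumptions for illustration).
struct HashResult { std::string narHash; uint64_t narSize = 0; };
struct ValidPathInfo { std::string path; std::string narHash; uint64_t narSize = 0; };
struct Source { std::string data; };  // a NAR byte stream, reduced to a string

struct Store {
    // Primitive: stream the NAR, compute hash/size, and only then ask the
    // caller to produce the path metadata from the observed HashResult.
    void addToStore(Source & narSource,
                    std::function<ValidPathInfo(HashResult)> mkInfo)
    {
        // Placeholder for real hashing while streaming the NAR.
        HashResult h{"sha256:<computed>", narSource.data.size()};
        lastAdded = mkInfo(h);
    }

    // Convenience overload: when the ValidPathInfo is known up front, it has
    // a default implementation in terms of the primitive, ignoring the hash
    // result (a real store would instead verify it against info.narHash).
    void addToStore(const ValidPathInfo & info, Source & narSource)
    {
        addToStore(narSource, [&](HashResult) { return info; });
    }

    ValidPathInfo lastAdded;  // recorded here only so the sketch is testable
};
```

The point of the shape is that the callback form is the more general primitive: the metadata can depend on the hash, so stores that must hash before naming the object only need to implement one method.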
I decided not to try to optimize away the temporary file for small NARs because the overhead is likely to be insignificant compared to stuff like HTTP requests.
We don't need to parse the NAR to determine the end if we make the caller responsible for ending `narSource`. That's what `addCAToStore` is doing.

`nix-store --import` comes to mind. It will have to parse the NAR because the import/export format doesn't have a way to determine the end by simpler means.
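The "caller responsible for ending `narSource`" idea can be illustrated with a hypothetical bounded-source wrapper (the real Nix `Source` interface differs; this is a minimal sketch assuming the caller already knows the NAR size): the callee just reads to end-of-stream and never needs to understand the NAR format.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Stand-in for an underlying byte stream (assumption for illustration).
struct StringSource {
    std::string data;
    std::size_t pos = 0;
    // Read up to `len` bytes; returns 0 at end of stream.
    std::size_t read(char * buf, std::size_t len)
    {
        std::size_t n = std::min(len, data.size() - pos);
        std::copy_n(data.data() + pos, n, buf);
        pos += n;
        return n;
    }
};

// Hypothetical wrapper: yields exactly `remaining` bytes of the inner
// stream, so the consumer sees end-of-stream at the caller-chosen boundary
// without parsing the payload.
struct BoundedSource {
    StringSource & inner;
    std::size_t remaining;  // the caller-supplied NAR size
    std::size_t read(char * buf, std::size_t len)
    {
        std::size_t n = inner.read(buf, std::min(len, remaining));
        remaining -= n;
        return n;
    }
};
```

With a length-prefixed framing like this, only formats without a size field up front (such as the import/export format mentioned above) force the consumer to parse the payload to find its end.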
I think I agree that it's better to make the caller responsible, but I'm a bit wary of changing the direction of this FIXME, as it and this code already existed; I just moved it here.
I'll let @edolstra decide :).