docs(cli): cache hard limits flags #3846
Conversation
Add reference to the cache hard limits.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #3846      +/-   ##
==========================================
+ Coverage   75.86%   77.15%   +1.28%
==========================================
  Files         470      479       +9
  Lines       37301    28674    -8627
==========================================
- Hits        28299    22122    -6177
+ Misses       7071     4657    -2414
+ Partials     1931     1895      -36

☔ View full report in Codecov by Sentry.
@@ -57,6 +57,14 @@ $ kopia cache set --max-list-cache-duration=300s
Note the cache sizes are not hard limits: cache is swept periodically (every few minutes) to bring
the total usage below the defined limit by removing least-recently used cache items.

A hard limit can be set if required via the corresponding `limit` flag:
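For illustration, the soft/hard pairing described above might look like the following. This is a sketch: the exact flag names (`--content-cache-size-mb`, `--content-cache-size-limit-mb`) are assumptions based on the pattern this PR documents, so verify them against `kopia cache set --help` for your version.

```shell
# Soft limit: the cache may temporarily exceed this size; a periodic
# sweep evicts least-recently-used items to bring usage back under it.
kopia cache set --content-cache-size-mb=4000

# Hard limit (assumed flag name; confirm with `kopia cache set --help`):
# the cache is not allowed to grow past this size at all, which can
# mean more frequent fetches from the repository.
kopia cache set --content-cache-size-limit-mb=5000
```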
Can we also add a side note or warning to this message that setting hard limits can potentially increase network activity?
That feels implied: if you have less cache, you might have to fetch things more often.
Edit: if you've got details or deeper insight into the impact, though ... please, by all means. I'm not a Kopia author, just trying to improve the documentation a bit as I learn things.
Not a problem, we can watch for community usage and add this note later if need be. Thank you for your contribution!
I understand your point about this being obvious to users: the smaller the limit, the greater the expected network utilization. I raised the concern because the impact has been perceived to be significant in some cases. For example, this issue:
We fixed the above problem, by the way, by allowing separate cache limits and defaults for each type of cache (list, index, metadata, and content). I don't have the data yet, but I think playing with all of these limits can surface some interesting network behavior.
Hmmm... That's definitely interesting, thanks for sharing. That does seem excessive.
This comment in particular:
But for now you really need to allocate a big cache; ours is 300GB (260GB for metadata, 40GB for data), for 300 snapshots of about 10GB each.
That seems like a strikingly large caching requirement.