Storage limit support? #64
Comments
Hi! I like this feature. It can be tricky to implement since we support several storage targets and deployment environments. Do you have something in mind? Which storage provider are you using?
Currently I'm just testing this and thus using the local storage, but I would prefer using the S3 storage. Maybe instead of having the limit here, you could hand that over to the storage system the user is using. I have two concerns with the approach above:
An alternative would maybe be to have some sort of TTL on the cache entries, plus a function that runs every half a day or so, fetches a list of all caches, and checks whether any of them have outlived the TTL and can thus be removed.
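For illustration, a minimal TypeScript sketch of that TTL sweep follows. The `CacheStorage` interface and its `listEntries` / `deleteEntry` methods are hypothetical stand-ins for whatever the configured storage backend exposes; they are not part of this project's API.

```typescript
// Hypothetical storage interface, assumed for this sketch only.
interface CacheEntry {
  key: string;
  createdAt: Date;
}

interface CacheStorage {
  listEntries(): Promise<CacheEntry[]>;
  deleteEntry(key: string): Promise<void>;
}

const TTL_MS = 7 * 24 * 60 * 60 * 1000; // example TTL: one week
const SWEEP_INTERVAL_MS = 12 * 60 * 60 * 1000; // "every half a day"

// Delete every entry that has outlived the TTL.
async function sweepExpired(storage: CacheStorage): Promise<void> {
  const now = Date.now();
  for (const entry of await storage.listEntries()) {
    if (now - entry.createdAt.getTime() > TTL_MS) {
      await storage.deleteEntry(entry.key);
    }
  }
}

// Run the sweep twice a day, as the comment suggests.
function startSweeper(storage: CacheStorage): ReturnType<typeof setInterval> {
  return setInterval(() => {
    sweepExpired(storage).catch((err) =>
      console.error("cache sweep failed:", err),
    );
  }, SWEEP_INTERVAL_MS);
}
```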
We currently save the cache in an S3 bucket, and to clean up old cache entries we have set up a lifecycle rule at the bucket level. It is the easiest thing to do, and all the major providers support it: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
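For reference, here is a minimal sketch of applying such a rule programmatically with the AWS SDK for JavaScript v3 (`@aws-sdk/client-s3`); the bucket name and the 30-day window are example values only:

```typescript
import {
  S3Client,
  PutBucketLifecycleConfigurationCommand,
} from "@aws-sdk/client-s3";

async function configureCacheExpiry(): Promise<void> {
  const s3 = new S3Client({});
  await s3.send(
    new PutBucketLifecycleConfigurationCommand({
      Bucket: "my-cache-bucket", // hypothetical bucket name
      LifecycleConfiguration: {
        Rules: [
          {
            ID: "expire-old-cache-entries",
            Status: "Enabled",
            // An empty prefix applies the rule to every object in the bucket.
            Filter: { Prefix: "" },
            // Objects are deleted 30 days after creation.
            Expiration: { Days: 30 },
          },
        ],
      },
    }),
  );
}
```

Note that a lifecycle rule expires objects by age rather than by total bucket size, so it implements the TTL idea from the previous comment rather than a hard size cap.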
I am closing this as there has been no further follow-up.
🚀 Feature Proposal
Is there a way I can limit the amount of storage that is used, and if not, is there any chance this feature might get added?
Motivation
I only have a finite amount of cloud storage, and I'm worried that using this will quickly fill it all up.
So I would like to set a limit on the amount of storage the cache can take up.
Example
I would like, for example, a remote cache with a maximum size of 20 GiB.
If that size is reached and new cache entries are added, the oldest entries are removed.
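A minimal TypeScript sketch of that eviction policy might look like the following; the `CacheStorage` interface, the per-entry size and creation-time fields, and the `listEntries` / `deleteEntry` methods are all hypothetical, since nothing like this exists in the project today:

```typescript
// Hypothetical storage interface, assumed for this sketch only.
interface CacheEntry {
  key: string;
  sizeBytes: number;
  createdAt: Date;
}

interface CacheStorage {
  listEntries(): Promise<CacheEntry[]>;
  deleteEntry(key: string): Promise<void>;
}

const MAX_TOTAL_BYTES = 20 * 1024 ** 3; // the 20 GiB cap from the example

// After each new entry is written, evict the oldest entries until the
// total size is back under the cap.
async function evictUntilUnderLimit(storage: CacheStorage): Promise<void> {
  const entries = await storage.listEntries();
  let total = entries.reduce((sum, e) => sum + e.sizeBytes, 0);

  // Oldest entries first, as the proposal describes.
  entries.sort((a, b) => a.createdAt.getTime() - b.createdAt.getTime());
  for (const entry of entries) {
    if (total <= MAX_TOTAL_BYTES) break;
    await storage.deleteEntry(entry.key);
    total -= entry.sizeBytes;
  }
}
```

Evicting by creation time is the simplest policy; sorting by last-access time instead would give LRU behaviour, which keeps frequently reused caches around longer.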
Alternatives
I presume I can clear the cache myself, and I would accept doing that once a week or month, but there is currently no documentation on this. :(