Static files in GCS have too long cache times #116
We hacked around that by changing the filename up until now. The reason why I selected GCS is that when the file was served by the BE, it was quite slow. Also, the GCS bucket is multiregional, so it's faster from anywhere. Anyway, I added cache settings and made the files public via #122. Do you think that's enough?
Thanks!
Maybe we should have an upload script that sets the cache params (along with the name, sharing, etc.)?
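If we went that route, a minimal sketch of such a script could look like the following. The bucket name, TTL, and ACL here are placeholders for illustration, not the project's actual settings:

```shell
# Hypothetical upload helper (sketch): composes the gsutil command that
# uploads a file with a short Cache-Control TTL and a public-read ACL.
# The bucket name below is a placeholder, not the project's real bucket.
upload_cmd() {
  file="$1"
  bucket="gs://example-data-bucket"           # placeholder bucket
  header="Cache-Control:public, max-age=10"   # short TTL discussed in this thread
  printf 'gsutil -h "%s" cp -a public-read %s %s/\n' "$header" "$file" "$bucket"
}

# Print the command; pipe it to sh to actually run it
# (executing for real needs gsutil installed and valid credentials).
upload_cmd data.json
```

Printing instead of executing keeps the sketch inspectable without GCS access; in CI, the printed command could be run directly by the service account.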
I was under the impression that we are going to update the files like once a day at most (and we are still quite far from that). And even if we did, I don't think users should notice anything about us updating the model.
No :-(. Devs can use
We can, but you need proper permissions to be able to run that (that's why I put it in CI, where the service account is ready).
I'm closing this; I don't think there is a way around it for now.
The GCS cache where the data files are stored has a default timeout of 1h, which is too long for running any experiments. It will also be a problem later in production.
Right now, when you update a data file (preserving the name), the cache keeps serving the old data for 1h. Even if you delete the file in GCS, it is still served 🤷‍♂️
Individual files can get `Cache-Control: public, max-age=10`, but this needs to be done on every upload (or we fall into the 1h trap). Can we somehow fix this in a better way? @hnykda
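For reference, the per-file header can be set either at upload time or retroactively on an already-uploaded object; both are standard `gsutil` invocations (the bucket and object names below are placeholders). Running these requires `gsutil` and write permissions on the bucket, so this is a sketch rather than something CI-ready:

```shell
# 1) Set Cache-Control at upload time, so the short TTL applies from the start
#    (placeholder bucket/object names):
gsutil -h "Cache-Control:public, max-age=10" cp data.json gs://example-bucket/data.json

# 2) Fix the metadata on an object that was already uploaded with the 1h default:
gsutil setmeta -h "Cache-Control:public, max-age=10" gs://example-bucket/data.json
```

Note that `setmeta` only changes future cache behavior; copies already held by caches can still be served until their original `max-age` expires.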