As I am sure the developers have also noticed, APIs have quotas, and developing toward 99.99% reliability requires you to retry/resend the same API requests (getting the same responses), which unnecessarily uses up your quota.
In a production sense, it is also likely that disparate input datasets, whether of poor or high quality, will produce recurring domains, and making an API request for the same data at short intervals is wasteful.
Given these use cases, data caching is a critical missing feature.
There are many viable options to choose from:

1. Manage a cache of raw responses and wrap the HTTP library/adapter so it serves responses from the cache location, based on the file timestamp and a configured TTL for the cached file.
2. Rely on HTTP headers such as `Last-Modified` and `ETag`, and the `304` response code, obtained from a `HEAD` request (quota is usually only consumed on other HTTP verbs).
3. Checksum files, store the hash in the db file, and use it similarly to option 1.
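Option 1 could be sketched roughly as follows. This is a minimal illustration, not the project's actual adapter: the cache directory, TTL value, and the injected `fetch` callable are all hypothetical names chosen for the example.

```python
import hashlib
import json
import time
from pathlib import Path

CACHE_DIR = Path("api_cache")   # hypothetical cache location
TTL_SECONDS = 24 * 3600         # hypothetical one-day TTL from configuration

def _cache_path(url: str) -> Path:
    # One file per request, keyed by a hash of the URL
    digest = hashlib.sha256(url.encode()).hexdigest()
    return CACHE_DIR / f"{digest}.json"

def cached_get(url: str, fetch) -> dict:
    """Return the cached response if fresh, else call fetch(url) and store it."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = _cache_path(url)
    if path.exists() and (time.time() - path.stat().st_mtime) < TTL_SECONDS:
        return json.loads(path.read_text())   # fresh cache hit: no quota spent
    response = fetch(url)                     # stale or missing: real request
    path.write_text(json.dumps(response))     # mtime doubles as the timestamp
    return response
```

The file's mtime serves as the timestamp, so no extra bookkeeping is needed beyond the configured TTL, and the cached files remain accessible to the user on disk.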
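The validator check in option 2 might look like the sketch below: compare the `ETag` from a cheap `HEAD` request against the last one stored, and only spend quota on a full request when it differs. The `head` callable and `etag_store` mapping are stand-ins for whatever HTTP client and persistence the tool actually uses.

```python
def needs_refresh(url: str, head, etag_store: dict) -> bool:
    """Decide whether a cached body is stale, using HTTP validators.

    `head` is any callable returning the response headers of a HEAD request;
    `etag_store` maps URL -> last seen ETag (persist it however you like).
    """
    headers = head(url)
    current = headers.get("ETag")
    if current is None:
        return True               # no validator available: must re-fetch
    if etag_store.get(url) == current:
        return False              # unchanged upstream: reuse the cached body
    etag_store[url] = current     # remember the new validator
    return True                   # changed: spend quota on a fresh GET
```

The equivalent full-request pattern is a `GET` with `If-None-Match`, where a `304 Not Modified` tells you the cached body is still valid.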
Hope this helps.
@caffix this is wonderful news - thank you!
I reviewed the diff and am not 100% sure, but I don't think there is a way to specify where responses are saved on the file system?
It seems the cache is purely a graph database, and responses are private to the app, so users still cannot access their data from third-party integrations.
I definitely agree that making redundant requests to data sources is wasteful, but dropping the responses into files is messy. The command-line tool could be expanded to output data that has been cached. For now, I'm glad that subscriptions can be used more effectively.