
Simple cache #106

Closed · wants to merge 5 commits


Conversation

@sjvrijn (Contributor) commented Mar 11, 2019

Adding a simple LRU cache to the api.others.toggl() function that dispatches the API calls.
It can be disabled with the configuration option cache_requests, and its size can be set with cache_size.
The cache is cleared in all functions that dispatch a PUT, POST, or DELETE API call.
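
For readers following along, here is a minimal runnable sketch of the behavior described above, with hypothetical names (the real PR wires the size through the cache_size config option and dispatches actual HTTP requests):

```python
from functools import lru_cache

CACHE_SIZE = 128  # stand-in for the cache_size configuration option

@lru_cache(maxsize=CACHE_SIZE)
def toggl(url, method="get"):
    # Stand-in dispatcher: the real function performs the HTTP request.
    print(f"dispatching {method.upper()} {url}")
    return {"url": url, "method": method}

toggl("/time_entries")   # dispatches the request
toggl("/time_entries")   # identical arguments -> served from the cache

# Every helper that dispatches a PUT/POST/DELETE clears the cache,
# so subsequent GETs cannot return stale data:
toggl.cache_clear()
toggl("/time_entries")   # dispatches again
```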

@sjvrijn (Contributor, Author) commented Mar 11, 2019

Relevant to #86, but as mentioned there: this PR is not intended to close it yet.

@AuHau (Owner) left a comment

Generally looks good! There needs to be a better way to support turning the caching on/off, as mentioned in my comment.

Also, I am a bit worried about one thing: if the API responds with an error message, that response will be cached even though it should not be, and the request should instead be retried later. Hence we should cache only 200 responses, and to be even more concrete, we should cache only GET requests; otherwise we might get problems with modification requests being ignored.
Unfortunately, it seems that lru_cache has very limited support for such operations, so I am a bit afraid that we should look into some more advanced caching options. Do you know of any?
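
To make the concern concrete, a minimal self-contained demonstration (hypothetical names) of how lru_cache memoizes whatever the function returns, a transient error response included:

```python
from functools import lru_cache

attempts = {"count": 0}

@lru_cache(maxsize=16)
def fetch(url):
    # Stand-in for the dispatcher; the first call fails transiently.
    attempts["count"] += 1
    if attempts["count"] == 1:
        return {"status": 500, "body": "temporary server error"}
    return {"status": 200, "body": "ok"}

print(fetch("/me"))  # {'status': 500, ...} -- the error gets cached
print(fetch("/me"))  # still the 500: lru_cache has no way to reject it
```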

Review thread on the diff:

```diff
@@ -175,3 +179,8 @@ def toggl(url, method, data=None, headers=None, config=None, address=None):

     # If retries failed then 'e' contains the last Exception/Error, lets re-raise it!
     raise exception


+if Config.factory().cache_requests:
```
@AuHau (Owner) commented on this change:

Hmm, I am afraid we can't do this. Config.factory() will always give you the default ~/.togglrc configuration, which might not be desired for people who build their own Config instance and pass it to the API wrappers.

@AuHau (Owner) commented:

Hmm, instead of toggl() we could wrap the _toggl_request() function, and then in toggl() decide, based on the passed config, whether to use the cached version or the non-cached one, bypassing the cache via the _toggl_request.__wrapped__ attribute that lru_cache exposes. This could then also be used to ignore the cache for POST/DELETE/PUT requests.

But still, it leaves us with the problem of caching error responses...
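
A sketch of that idea, assuming lru_cache is moved onto _toggl_request() (signatures simplified; the real function takes more parameters):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def _toggl_request(url, method):
    # Simplified stand-in for the low-level request dispatcher.
    return {"url": url, "method": method}

def toggl(url, method, config=None):
    use_cache = config is not None and getattr(config, "cache_requests", False)
    if use_cache and method == "get":
        return _toggl_request(url, method)
    # functools exposes the undecorated function as __wrapped__, so
    # POST/DELETE/PUT (and uncached configs) can bypass the cache entirely:
    return _toggl_request.__wrapped__(url, method)
```

Note that this still caches whatever _toggl_request() returns for GETs, error responses included, which is the remaining problem mentioned above.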

@AuHau (Owner) commented Mar 12, 2019

Also, we will need to write test coverage for this to make sure that we are not missing anything.

@sjvrijn (Contributor, Author) commented Mar 15, 2019

Thanks for the feedback!
_toggl_request() cannot be cached using the default lru_cache, as it takes non-hashable input. Also, you raise a good point about not wanting to cache any non-200 responses, which makes lru_cache difficult to use anyway.
I'll have a look at replacing it with a more custom caching solution; after all, Python's lru_cache is also simply a dictionary underneath. Would you be okay with integrating the caching in, e.g., the toggl() function itself? I wouldn't know how to integrate a different run-time config object into a decorator-style wrapper.
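
One possible shape for that, sketched under stated assumptions: a plain module-level dict (unbounded here; a real version would add LRU eviction), a response object with a status_code attribute, and a stand-in for the existing _toggl_request() dispatcher:

```python
from types import SimpleNamespace

_cache = {}

def _toggl_request(url, method, data=None):
    # Stand-in for the existing low-level dispatcher.
    return SimpleNamespace(status_code=200, url=url, method=method)

def toggl(url, method, data=None, config=None):
    use_cache = config is not None and getattr(config, "cache_requests", False)

    if use_cache and method == "get" and url in _cache:
        return _cache[url]           # cache hit: no request dispatched

    response = _toggl_request(url, method, data)

    if method in ("put", "post", "delete"):
        _cache.clear()               # any mutation invalidates everything
    elif use_cache and method == "get" and response.status_code == 200:
        _cache[url] = response       # store only successful GET responses
    return response
```

Because the config already travels through toggl(), the caching decision happens per call at run time rather than at decoration time, which sidesteps the decorator/config problem.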

@AuHau (Owner) commented Jun 1, 2020

Hey,
since there is no progress on this PR, I am gonna close it for now. If you decide to come back to it, feel free to reopen.

AuHau closed this on Jun 1, 2020