Add an async variant of lru_cache for coroutines. #90780
Currently, decorating a coroutine with cached_property would cache the coroutine itself. But this is not useful in any way since a coroutine cannot be awaited multiple times. Running this code:

```python
import asyncio
import functools

class A:
    @functools.cached_property
    async def hello(self):
        return 'yo'

async def main():
    a = A()
    print(await a.hello)
    print(await a.hello)

asyncio.run(main())
```

produces:

```
yo
Traceback (most recent call last):
  File "t.py", line 15, in <module>
    asyncio.run(main())
  File "/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete
    return future.result()
  File "t.py", line 12, in main
    print(await a.hello)
RuntimeError: cannot reuse already awaited coroutine
```

The third-party cached_property package, on the other hand, detects a coroutine and caches its result instead. I feel this is a more useful behaviour. pydanny/cached-property#85 |
Pull Request is welcome! |
Hmm, this introduces some difficulties. Since a coroutine can only be awaited once, a new coroutine needs to be returned (one that simply returns the result when awaited) each time __get__ is called. But this means we need a way to somehow get the result in __get__. If there's a separate |
You can return a wrapper from __get__ that awaits the inner function and saves the result somewhere. |
Something like:

```python
_unset = ['unset']

class CachedAwaitable:
    def __init__(self, awaitable):
        self.awaitable = awaitable
        self.result = _unset

    def __await__(self):
        if self.result is _unset:
            self.result = yield from self.awaitable.__await__()
        return self.result
```
|
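A quick illustration (added here, not part of the original comment, and assuming the CachedAwaitable class sketched above): the same CachedAwaitable object can be awaited repeatedly, with the wrapped coroutine running only once.

```python
import asyncio

async def compute():
    return 'yo'

async def main():
    cached = CachedAwaitable(compute())
    print(await cached)  # first await drives the wrapped coroutine
    print(await cached)  # second await returns the stored result

asyncio.run(main())
```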
I have a design question. In asyncio code I have seen before,
|
I agree. Personally, I'd find this more natural:

```python
class Foo:
    @functools.cache
    async def go(self):
        print(1)

async def main():
    foo = Foo()
    await foo.go()
    await foo.go()
```

Although now I just noticed this actually does not work either. Perhaps we should fix this instead and add a line in the documentation under cached_property to point people to the correct path? |
Agree. Let's start with async function support in lru_cache. If we reach agreement that cached_property is an important use case, we can return to this issue. I have a feeling that an async lru_cache is much more important. https://pypi.org/project/async_lru/ has 0.5 million downloads per month: https://pypistats.org/packages/async-lru |
Note that there is a similar issue with cached generators.

```
>>> from functools import *
>>> @lru_cache()
... def g():
...     yield 1
...
>>> list(g())
[1]
>>> list(g())
[]
```

I am not sure that it is safe to detect awaitables and iterables in caching decorators and automatically wrap them in re-awaitable and re-iterable objects. But we can add explicit decorators and combine them with arbitrary caching decorators. For example:

```python
@lru_cache()
@reiterable
def g():
    yield 1
```
|
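The `reiterable` decorator is not spelled out in the thread; a minimal sketch of the idea might look like the following (my assumption: it simply buffers every yielded item in memory so the sequence can be replayed).

```python
from functools import wraps

class CachedIterable:
    """Iterable that records items from an underlying iterator and replays them."""

    def __init__(self, iterable):
        self._iterator = iter(iterable)
        self._cache = []

    def __iter__(self):
        i = 0
        while True:
            if i < len(self._cache):
                # Replay an item that was already produced.
                yield self._cache[i]
            else:
                # Pull the next item from the source and remember it.
                try:
                    item = next(self._iterator)
                except StopIteration:
                    return
                self._cache.append(item)
                yield item
            i += 1

def reiterable(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return CachedIterable(func(*args, **kwargs))
    return wrapper
```

With this, the `list(g())` example above would return `[1]` both times when `g` is decorated with `@lru_cache()` and `@reiterable`.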
From my point of view, both sync and async functions can be cached. Sync and async iterators/generators are other beasts: they don't return a value but generate a series of items. The series can be long and memory-consuming; I doubt it can be cached safely. |
If this goes forward, my strong preference is to have a separate async_lru() function just like the referenced external project. For non-async uses, overloading the current lru_cache makes it confusing to reason about. It becomes harder to describe clearly what the caches do or to show a pure python equivalent. People are already challenged to digest the current capabilities and are already finding it difficult to reason about when references are held. I don't want to add complexity, expand the docs to be more voluminous, cover the new corner cases, and add examples that don't make sense to non-async users (i.e. the vast majority of python users). Nor do I want to update the recipes for lru_cache variants to all be async aware. So, please keep this separate (it is okay to share some of the underlying implementation, but the APIs should be distinct).

Also, as a matter of fair and equitable policy, I am concerned about taking the core of a popular external project and putting it in the standard library. That would tend to kill the external project, either stealing all its users or leaving it as something that only offers a few incremental features above those in the standard library. That is profoundly unfair to the people who created, maintained, and promoted the project. Various SC members periodically voice a desire to move functionality *out* of the standard library and into PyPI rather than the reverse. If a popular external package is meeting needs, perhaps it should be left alone.

As noted by the other respondents, caching sync and async iterators/generators is venturing out on thin ice. Even if it could be done reliably (which is dubious), it is likely to be bug bait for users. Remember, we already get people who try to cache time(), random(), or other impure functions. They cache methods and cannot understand why a reference is held for the instance. Assuredly, coroutines and futures will encounter even more misunderstandings.

Also, automatic reiterability is a can of worms and would likely require a PEP. Every time the subject has come up before, we've decided not to go there. Let's not make a tool that makes users' lives worse. Not everything should be cached. Even if we can, it doesn't mean we should. |
Thanks, Raymond. I agree that caching of iterators and generators is out of scope for this issue. Also, I agree that a separate async cache decorator should be added. I prefer the
I think this function should be part of the stdlib because the implementation shares functools internals. Similar reasons were applied to the contextlib async APIs. Third parties can have different features (time-to-live, expiration events, etc.) and can be async-framework specific (work with asyncio or trio only) -- I don't care about these extensions here. My point is: the stdlib has built-in LRU cache support, and I love it. Let's add exactly what we already have for sync functions, but for async ones. |
Another thing to point out is that existing third-party solutions (both alru_cache and cached_property) only work for asyncio, and the stdlib version (as implemented now) will be an improvement. And if the position is that the improvements should only be submitted to third-party solutions, I would have to disagree, since both lru_cache and cached_property had third-party solutions predating their stdlib implementations, and it is a double standard IMO if an async solution is kept out while the sync version is kept in. |
I think that it would be simpler to add a decorator which wraps the result of an asynchronous function into an object which can be awaited more than once:

```python
def reawaitable(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return CachedAwaitable(func(*args, **kwargs))
    return wrapper
```

It can be combined with lru_cache, cached_property, or any third-party caching decorator. No access to the internals of the cache is needed.

```python
@lru_cache()
@reawaitable
async def coro(...):
    ...

@cached_property
@reawaitable
async def prop(self):
    ...
```
|
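As a usage sketch (my example, assuming the CachedAwaitable and reawaitable definitions above), this combination resolves the failure from the original report: the property can be awaited twice without a "cannot reuse already awaited coroutine" error.

```python
import asyncio
from functools import cached_property

class A:
    @cached_property
    @reawaitable
    async def hello(self):
        return 'yo'

async def main():
    a = A()
    print(await a.hello)  # runs the coroutine and caches the result
    print(await a.hello)  # reuses the cached result

asyncio.run(main())
```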
async_lru_cache() and async_cached_property() can be written using that decorator. The implementation of async_lru_cache() is complicated because the interface of lru_cache() is complicated. But it is simpler than using _lru_cache_wrapper().

```python
def async_lru_cache(maxsize=128, typed=False):
    if callable(maxsize) and isinstance(typed, bool):
        user_function, maxsize = maxsize, 128
        return lru_cache(maxsize, typed)(reawaitable(user_function))

    def decorating_function(user_function):
        return lru_cache(maxsize, typed)(reawaitable(user_function))

    return decorating_function

def async_cached_property(user_function):
    return cached_property(reawaitable(user_function))
```
|
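For what it's worth, here is how the two helpers above would be used; the names `fetch`, `fetch_typed`, and `Config.settings` are placeholders of mine, but both decorator call forms mirror `functools.lru_cache`.

```python
# Bare form: the decorated function arrives in the `maxsize` parameter and
# is detected by the callable() check.
@async_lru_cache
async def fetch(key):
    return key

# Parameterized form: the arguments are forwarded to lru_cache().
@async_lru_cache(maxsize=32, typed=True)
async def fetch_typed(key):
    return key

class Config:
    @async_cached_property
    async def settings(self):
        return {"debug": False}
```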
[Andrew Svetlov]
> […]

OrderedDict provides just about everything needed to roll lru cache variants. It simply isn't true that this can only be done efficiently in the standard library.

[Serhiy]
> […]

This is much more sensible.

> […]

Right. The premise that this can only be done in the standard library was false.

> […]

The task becomes trivially easy :-)

[Andrew Svetlov]
> […]

ISTM it was premature to ask for a PR before the idea had been thought through. We risk wasting a user's time or committing too early, before simpler, better-designed alternatives emerge. |
Just as a data point, as of today the async_lru package doesn't work with Python 3.10. I stumbled across this issue just now when converting some existing code to be async, and from an end user's POV, it initially seems like I can't have both async and lru_cache at the same time. (I know that's not the case, and I could just use the fixed async_lru commit for my own code, but that might be a bridge too far for a newer Python user.) |
Marking this as closed. If needed, a new issue can be opened with Serhiy's reawaitable decorator, which would be a much cleaner and more universal solution. |
But should the stdlib support lru_cache for the results of async functions or not? |
For future reference, the solution that can be pieced together from this thread is:

```python
# Async support for @functools.lru_cache
# From https://github.com/python/cpython/issues/90780
from functools import wraps, lru_cache

_unset = ['unset']

class CachedAwaitable:
    def __init__(self, awaitable):
        self.awaitable = awaitable
        self.result = _unset

    def __await__(self):
        if self.result is _unset:
            self.result = yield from self.awaitable.__await__()
        return self.result

def reawaitable(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return CachedAwaitable(func(*args, **kwargs))
    return wrapper

def async_lru_cache(maxsize=128, typed=False):
    if callable(maxsize) and isinstance(typed, bool):
        user_function, maxsize = maxsize, 128
        return lru_cache(maxsize, typed)(reawaitable(user_function))

    def decorating_function(user_function):
        return lru_cache(maxsize, typed)(reawaitable(user_function))

    return decorating_function
```

With a pytest testcase.
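The testcase itself is not reproduced in this thread; purely as an illustration, a sequential check along these lines passes with the code above (it assumes the pytest-asyncio plugin for running the async test function).

```python
import pytest

@pytest.mark.asyncio
async def test_async_lru_cache_sequential():
    calls = []

    @async_lru_cache
    async def coro(arg: str) -> str:
        calls.append(arg)
        return arg

    assert await coro("A") == "A"
    assert await coro("A") == "A"  # second await hits the cache
    assert calls == ["A"]
```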
|
Follow up: the code above has a bug. If I update my testcase a bit:

```python
import asyncio

async def test_async_lru():
    hit_count = []

    @async_lru_cache
    async def coro(arg: str) -> str:
        hit_count.append(arg)
        await asyncio.sleep(0.01)
        return arg

    async def work(arg: str) -> str:
        return await coro("A")

    a_fut_1 = asyncio.create_task(work("A"))
    a_fut_2 = asyncio.create_task(work("A"))

    assert "A" == await a_fut_1
    assert len(hit_count) == 1
    assert "A" == await a_fut_2
    assert len(hit_count) == 1
    assert "A" == await coro("A")
    assert "B" == await coro("B")
    assert len(hit_count) == 2
```

I get
If I patch it up to this:

```python
import asyncio
from asyncio import Future
from typing import Any, Awaitable, Generator, TypeVar

T = TypeVar("T")

class CachedAwaitable(Awaitable[T]):
    def __init__(self, awaitable: Awaitable[T]) -> None:
        self.awaitable = awaitable
        self.result: Future[T] | None = None

    def __await__(self) -> Generator[Any, None, T]:
        if self.result is None:
            fut = asyncio.get_event_loop().create_future()
            self.result = fut
            result = yield from self.awaitable.__await__()
            fut.set_result(result)
        if not self.result.done():
            yield from self.result
        return self.result.result()
```

But I can only just understand this code, so it would be nice if someone could confirm this is correct? |
@wouterdb wrote:
This implementation is problematic because it can only work with asyncio. |
Ok, I don't understand why, but it does confirm that this is way too subtle for me. Thanks. |
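Side note on the patched CachedAwaitable above: since it already calls asyncio.get_event_loop(), it is tied to asyncio anyway, and a possibly simpler asyncio-only variant (my sketch, not from this thread) is to schedule the wrapped coroutine as a Task on first await; a Task can safely be awaited by any number of awaiters, concurrent or sequential.

```python
import asyncio

class TaskCachedAwaitable:
    def __init__(self, awaitable):
        self._awaitable = awaitable
        self._task = None

    def __await__(self):
        if self._task is None:
            # First awaiter wraps the coroutine in a Task; every awaiter
            # (including concurrent ones) then awaits the same Task.
            self._task = asyncio.ensure_future(self._awaitable)
        return self._task.__await__()
```

One behavioural difference worth noting: once scheduled, the Task keeps running even if the original awaiter is cancelled, which may or may not be what a cache wants.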