async-lru: simple LRU cache for asyncio


```
pip install async-lru
```


This package is a port of Python's built-in functools.lru_cache function for asyncio. To better handle async behaviour, it also ensures that multiple concurrent calls with the same arguments result in only a single call to the wrapped function, with every awaiting caller receiving the result of that call once it completes.

```python
import asyncio

import aiohttp
from async_lru import alru_cache


@alru_cache(maxsize=32)
async def get_pep(num):
    resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                return await s.read()
        except aiohttp.ClientError:
            return 'Not Found'


async def main():
    for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
        pep = await get_pep(n)
        print(n, len(pep))

    print(get_pep.cache_info())
    # CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

    # closing is optional, but highly recommended
    await get_pep.cache_close()


asyncio.run(main())
```

TTL (time-to-live, expiration on timeout) is supported by passing the ttl configuration parameter (disabled by default):

```python
@alru_cache(ttl=5)
async def func(arg):
    return arg * 2
```

The library supports explicit invalidation of a specific function call via cache_invalidate():

```python
@alru_cache()
async def func(arg1, arg2):
    return arg1 + arg2


func.cache_invalidate(1, arg2=2)
```

The method returns True if the corresponding set of arguments was cached, and False otherwise.

Python 3.8+ is required.


The library was donated by Ocean S.A.

Thanks to the company for its contribution.